A dubious argument for panpsychism

At Aeon, philosopher Philip Goff argues for panpsychism:

Panpsychism is crazy, but it’s also most probably true

It’s a short essay that only takes a couple of minutes to read.

Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:

I maintain that there is a powerful simplicity argument in favour of panpsychism…

In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.

…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.

Panpsychism is crazy. But it is also highly likely to be true.

I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.

656 thoughts on “A dubious argument for panpsychism”

  1. Alan Fox: I think I can argue that there is a very primitive level of awareness when a flagellate bacterium employs run-and-tumble strategy to maintain itself in optimal nutrient concentration.

    I’d agree with that, thus demystifying consciousness. That won’t stop philosophers from asking how does it feel to be an E coli on a nutrient gradient, therefore “hard problem.”
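    The run-and-tumble strategy Alan describes can be sketched as a toy simulation (a hypothetical illustration only, not E. coli's actual biochemistry): the cell swims straight, compares the current nutrient concentration with the recent past, and tumbles (picks a random new direction) less often when the concentration is rising. The nutrient field and all parameters below are made up for the sketch.

    ```python
    import math
    import random

    def concentration(x, y):
        # Hypothetical nutrient field: a single peak at the origin.
        return math.exp(-(x * x + y * y) / 200.0)

    def run_and_tumble(steps=2000, seed=0):
        rng = random.Random(seed)
        x, y = 20.0, 20.0                 # start away from the peak
        angle = rng.uniform(0, 2 * math.pi)
        prev = concentration(x, y)
        for _ in range(steps):
            x += math.cos(angle)          # "run": swim one unit forward
            y += math.sin(angle)
            cur = concentration(x, y)
            # Tumble rarely when things are improving, often when not.
            p_tumble = 0.05 if cur > prev else 0.5
            if rng.random() < p_tumble:
                angle = rng.uniform(0, 2 * math.pi)
            prev = cur
        return x, y

    fx, fy = run_and_tumble()
    print(math.hypot(fx, fy))  # final distance from the nutrient peak
    ```

    The biased random walk climbs the gradient without the cell ever representing a goal, which is why it makes a nice test case for where "awareness" talk starts and stops.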

  2. Alan Fox: Consciousness doesn’t really have a generally accepted meaning when talking about other species. Are dogs conscious? Cats? Bacteria? I think I can argue that there is a very primitive level of awareness when a flagellate bacterium employs run-and-tumble strategy to maintain itself in optimal nutrient concentration.

    I’d be OK with that myself, as a first pass. The problem is that it only kicks the can down the road — then we have either an ‘explanatory gap’ between chemistry and biology, or we go with something like panpsychism after all.

  3. Kantian Naturalist: The problem is that it only kicks the can down the road…

    Oh sure.

    My intuition is humans will fail at explaining human cognitive abilities in terms of neuroscience and biochemistry. But it doesn’t stop us extrapolating from simpler examples. What I’m trying to convey is there is a continuum rather than a cutoff for awareness.

  4. Sort of on-topic, I see an exchange between Bob O’Hara and G Puccio at UD here. Gpuccio writes:

    Of course I would not say that consciousness is a function of the brain, but rather that it is expressed through the brain, at least in the human state.

    The brain-consciousness interface works in both directions: physical events in neurons become in some way subjective experiences, and subjective experiences can initiate changes in brain events. The idea of an interface allows space for free will, that IMO is really unrenounceable.

    So, I am perfectly fine with the way you put it:

    “if you want to believe in a soul, then I would hope it’s clear that the soul has to interact with the material world in some way, and that, I presume, would have to be through the brain”

    That’s exactly my idea of an interface.

    I’d be impressed if G Puccio could suggest how this interface works, what parts of the human brain might be involved and whether the second law of thermodynamics is routinely violated.

  5. Kantian Naturalist: The problem is that it only kicks the can down the road — then we have either an ‘explanatory gap’ between chemistry and biology, or we go with something like panpsychism after all.

    No, this is just wrong. There isn’t an explanatory gap here between chemistry and biology.

    Explaining consciousness is not a biology problem. It isn’t even a scientific problem. It is a philosophy problem.

  6. Neil Rickert: Explaining consciousness is not a biology problem. It isn’t even a scientific problem. It is a philosophy problem.

    Why isn’t explaining consciousness a scientific problem?

  7. Alan Fox: My intuition is humans will fail at explaining human cognitive abilities in terms of neuroscience and biochemistry. But it doesn’t stop us extrapolating from simpler examples. What I’m trying to convey is there is a continuum rather than a cutoff for awareness.

    Interesting. My intuition is that explaining cognition in terms of neuroscience (plus other sciences) is coming along nicely, though there are serious problems still to be solved. It’s consciousness that’s (supposedly) the problem.

    For what it may be worth, I think that a scientific explanation of intentionality is well within our grasp, and it lies pretty much where Sellars thought it did (with some caveats, tweaks, and revisions). But I have no idea what to say about consciousness.

  8. KN, to Alan:

    Kantian Naturalist: I agree that we should talk about levels or degrees of awareness, but I’ll confess that I don’t share your intuition that the word “awareness” conveys gradations whereas the word “consciousness” does not.

    keiths:

    Nor I. It makes perfect sense to talk about someone “drifting in and out of consciousness”, for instance.

    Alan:

    Consciousness has a perfectly good medical usage. Deeply unconscious, barely conscious etc. But this is not what I am suggesting as a phenomenon shared across living organisms.

    Alan,

    Your argument was that “awareness” was a better term than “consciousness” because “awareness” suggests a continuum while “consciousness” suggests a dichotomy — you either have it or you don’t.

    But if an individual can experience degrees of consciousness, there’s no reason to think that consciousness can’t differ by degrees across species.

  9. Neil:

    Explaining consciousness is not a biology problem. It isn’t even a scientific problem. It is a philosophy problem.

    And it’s one that you, personally, claim to have solved:

    keiths, to petrushka:

    It [the problem of consciousness] ain’t gonna become tractable if people don’t work on it.

    Neil:

    I’ve been working on it.

    I’ve “solved” it.

    keiths:

    Is there anyone besides you who thinks you’ve “solved” the problem of consciousness?

    Neil:

    No.

    Too funny.

  10. Entropy:

    That won’t stop philosophers from asking how does it feel to be an E coli on a nutrient gradient, therefore “hard problem.”

    More accurately, philosophers will ask “Does it feel like anything to be an E coli on a nutrient gradient, and how can we tell?”

  11. keiths:
    Entropy:

    More accurately, philosophers will ask “Does it feel like anything to be an E coli on a nutrient gradient, and how can we tell?”

    That’d also make my point.

  12. Entropy:

    That won’t stop philosophers from asking how does it feel to be an E coli on a nutrient gradient, therefore “hard problem.”

    keiths:

    More accurately, philosophers will ask “Does it feel like anything to be an E coli on a nutrient gradient, and how can we tell?”

    Entropy:

    That’d also make my point.

    No, because if it doesn’t feel like anything to be that E coli bacterium, then the hard problem doesn’t arise in this context.

  13. Neil Rickert: Because it isn’t a problem due to a shortage of evidence. It’s a conceptual problem.

    What’s the conceptual problem, as you see it?

  14. Kantian Naturalist: What’s the conceptual problem, as you see it?

    We misconceive our relation to the natural world.

    We credit nature with most of the heavy lifting, and see ourselves as mainly passive participants. But we do most of the heavy lifting.

    For example, people talk of carving the world at its seams. But there are no seams. We carve the world in ways that will be useful to us. And then we say that there were seams where we did the carving. We created the seams with our activity of carving. Nature didn’t do it for us.

    Another example: people see facts as part of nature. But we create facts with our interactions with nature. Facts about time only exist because we invented clocks. Facts about distance only exist because we invented measuring systems. Here I use “we invented” broadly, to include what our cognitive system might have invented.

    The result: our interaction with the natural world involves a lot of work by us (including what our cognitive systems do). Conscious experience is the experience of doing all of that work. By contrast, an AI robot is not doing all of that work, which is why it won’t have conscious experience. Scientists and engineering labs do a lot of the work for the AI system. Sensors do part of the work. And the computation does very little of that work.

  15. keiths:
    No, because if it doesn’t feel like anything to be that E coli bacterium, then the hard problem doesn’t arise in this context.

    If you accept the idea that there’s a hard problem, then you are assuming that you cannot know if it feels like anything to be an E coli bacterium.

    The hard problem seems to be understanding the fundamental problems with the hard problem.

    ETA: someone else who finds the hard problem to be crap.

  16. Neil Rickert: By contrast, an AI robot is not doing all of that work, which is why it won’t have conscious experience. Scientists and engineering labs do a lot of the work for the AI system. Sensors do part of the work. And the computation does very little of that work.

    If and when robots or computers do what humans do, they will be conscious. There are, of course, many things that humans do that seem to be ignored in the AI debate. Humans and animals have emotional responses that are not at all superfluous. They are the hidden drivers of behavior and learning, including intellectual behavior. They are the makers of decisions before we become aware that a decision has been made.

    We (some of us) have empathy for animals because we look at their behavior and conclude that they feel pain and have emotions. It is this aspect of consciousness that engenders empathy and compassion. We do not worry about whether Deep Blue hurts when it loses a game, because nothing about its design looks like emotion or feeling of pleasure or pain.

    So the really interesting part of AI would be replicating the motivation system. Far more difficult, apparently, than replicating logic.

    The other day I watched a pair of birds coax their chicks into leaving the nest. I’ve seen this in films, but this took place less than ten feet from where I was sitting. The parents stood below the nest and chirped. The babies came to the edge of the nest one by one, hesitated for a minute or so, then flew off.

    These bird brains are about the size of a match head, and they do something that hasn’t been replicated. This is an evolved system. It is quite different from anything that an engineer would design.

  17. petrushka,

    Yes, I mostly agree with that. In particular:

    Humans and animals have emotional responses that are not at all superfluous. They are the hidden drivers of behavior and learning, including intellectual behavior.

    Yes, this is important.

    There is a tendency to downplay the role of emotions, and to emphasize cool clear logic. Logic is fine for solving formal problems. But logic alone cannot tell you how to create a suitable formal model for dealing with real world issues.

  18. Entropy:

    If you accept the idea that there’s a hard problem, then you are assuming that you cannot know if it feels like anything to be an E coli bacterium.

    No, to accept the hard problem merely means that you see the difficulty of explaining first-person phenomenology (or first-chimp, or first-cat, or first-bacterium phenomenology, if there even is such a thing) in terms of third-person physical descriptions. It doesn’t mean that you think the problem is unsolvable.

    And to solve the problem in the case of humans, for instance, won’t necessarily tell you whether it’s “like anything” to be a bacterium.

    But since you think none of this is problematic, then tell us: Is it “like something” to be an E coli bacterium on a nutrient gradient? How do you know?

  19. Neil,

    The result: our interaction with the natural world involves a lot of work by us (including what our cognitive systems do). Conscious experience is the experience of doing all of that work. By contrast, an AI robot is not doing all of that work, which is why it won’t have conscious experience.

    You haven’t addressed the actual issue, which is: why do some kinds or amounts of “work” produce conscious experience, when other kinds or amounts don’t? What, specifically, makes the difference?

  20. petrushka:

    If and when robots or computers do what humans do, they will be conscious.

    Then you’re a functionalist, after all.

  21. keiths,

    I see that my comment was incomplete, so let me rephrase: If you accept the idea that there’s a hard problem, then you are assuming that the question about whether it feels like anything to be an E coli bacterium is both valid and unanswerable.

    Also, someone else who finds the hard problem to be crap.

  22. Neil Rickert: We misconceive our relation to the natural world.

    We credit nature with most of the heavy lifting, and see ourselves as mainly passive participants. But we do most of the heavy lifting.

    For example, people talk of carving the world at its seams. But there are no seams. We carve the world in ways that will be useful to us. And then we say that there were seams where we did the carving. We created the seams with our activity of carving. Nature didn’t do it for us.

    Another example: people see facts as part of nature. But we create facts with our interactions with nature. Facts about time only exist because we invented clocks. Facts about distance only exist because we invented measuring systems. Here I use “we invented” broadly, to include what our cognitive system might have invented.

    The result: our interaction with the natural world involves a lot of work by us (including what our cognitive systems do). Conscious experience is the experience of doing all of that work. By contrast, an AI robot is not doing all of that work, which is why it won’t have conscious experience. Scientists and engineering labs do a lot of the work for the AI system. Sensors do part of the work. And the computation does very little of that work.

    I like all of that as crucial for an adequate theory of cognition, and my reservations are relatively minor against a background of agreement. But I don’t see what that has to do with what you said earlier, about the conceptual problem of consciousness.

  23. keiths:
    petrushka:
    Then you’re a functionalist, after all.

    I don’t see any value added by labels. But if you must, I think I’m more nearly a behaviorist. There was a time when Behaviorists asserted that there was no point in postulating anything “inside” a person or animal. I think that’s a bit obsolete. We know quite a bit about what’s going on in brains, and we are making slow but steady progress in emulating brains.

    Technology and science slowly erode the need to make abstractions and reify them.

    But I am agnostic about whether the Star Trek Data is possible (or more precisely, whether it will happen). I’m skeptical not because I think there is something magic about biological brains, but because I’m not convinced the project makes any commercial sense, and because I think we value computers precisely because they can be guilt-free slaves.

  24. Kantian Naturalist: But I don’t see what that has to do with what you said earlier, about the conceptual problem of consciousness.

    You possibly assumed that I was suggesting a problem with the concept of consciousness. While there are problems with that, my comment about a conceptual problem was about other concepts, such as: fact, object, information.

  25. Entropy: Also, someone else who finds the hard problem to be crap.

    Thanks for the link. Pigliucci makes a lot of sense in that article.

  26. Neil Rickert: There is a tendency to downplay the role of emotions…

    Indeed. If innate behaviours are heritable (as they must be) then the obvious (to me, anyway) avenue to explore is how emotions are produced, controlled and modified by hormones and pheromones that affect mood, responses to stimuli and so on.

  27. I see Barry Arrington is making my point for me regarding the vagueness and subjectivity of “consciousness”.

    …even if we were able to someday hundreds of years from now come up with a very very clever machine (I have in mind “Commander Data” of Star Trek Next Generation fame), we would still never know with certainty the machine is conscious.

    here

  28. At this point I’m tempted to run a Wittgensteinian argument against all this Chalmers/Nagel stuff: I don’t know that I’m conscious because I am certain of it.

  29. petrushka:

    If and when robots or computers do what humans do, they will be conscious.

    keiths:
    petrushka:

    Then you’re a functionalist, after all.

    Sounds like Skinnerian behaviorism to me.

  30. petrushka: I don’t see any value added by labels. But if you must, I think I’m more nearly a behaviorist. There was a time when Behaviorists asserted that there was no point in postulating anything “inside” a person or animal. I think that’s a bit obsolete. We know quite a bit about what’s going on in brains, and we are making slow but steady progress in emulating brains.

    Oops. Didn’t see this before I posted a second ago. There’s a long-term, very devoted Skinnerian in my yahoo philosophy group. The rest of us know he’s smart, but we sometimes have a hard time understanding him.

  31. I would say there are parallels between Skinner and Darwin. Both of them noted that consequences modify the structure of populations via selection. Both labored without any knowledge of the underlying mechanism. Both were “black box” theorists, and both produced incomplete theories.

    I believe there is not and cannot be a complete theory of biology or of psychology. There is no grammar that can produce new genome sequences with predictable effects on reproductive success. And there is no grammar of neural structures that can predict the behavior of brains. It’s engineering by cut and try.

    Until this conjecture is disproved there can be no design in biology or psychology or AI. That is why my challenge to ID proponents is: show me. Design something. Copy and paste doesn’t count. Show me a new sequence and tell me what it does before you implement it. Prove that design is possible.

  32. Kantian Naturalist:
    At this point I’m tempted to run a Wittgensteinian argument against all this Chalmers/Nagel stuff: I don’t know that I’m conscious because I am certain of it.

    One knows one is conscious because one experiences it, whether or not one is certain of it.

    Or we don’t experience it, have varying levels of consciousness, and end up with altered and partial consciousness–such as in dreams, when some brain areas aren’t operating well, and others work well enough.

    Being in denial of consciousness as a condition rather different from unconsciousness certainly doesn’t count as evidence against.

    Glen Davidson

  33. I “grew up” at the time Skinner and Chomsky were warring over language. I would say that both of them won and lost. Chomsky was right that there are inborn language faculties, and wrong in asserting that there can be a generative grammar of natural language.

    Only formal statements are grammatical, and they account for a minority of verbal behavior. The rest of our jabber is extremely context dependent and cannot be analyzed outside its context.

  34. GlenDavidson: Being in denial of consciousness as a condition rather different from unconsciousness certainly doesn’t count as evidence against.

    I’d still like to have some idea what people generally mean when they talk about consciousness.

  35. Alan Fox: I’d still like to have some idea what people generally mean when they talk about consciousness.

    I tried.

    We acknowledge that there is a continuum of awareness or attentiveness. We see that some animals are aware of their own bodies. They groom themselves. They may use visual feedback from the position of their limbs while attempting some activity.

    It is a difference in degree rather than a difference in kind to be aware of internal states. Freud is generally credited with being the first to write extensively about the fact that there are important internal states that we cannot observe or become aware of. There is evidence that some animals rehearse actions internally.

    Language seems to be a game changer, but it also exists on a continuum. If we rehearse talking, we are thinking. Rehearsing any action internally could be called thinking, but language amplifies the possibilities of thinking about long and complex sequences.

  36. Alan Fox: I’d still like to have some idea what people generally mean when they talk about consciousness.

    I’m not sure anybody really knows what “consciousness” is supposed to mean.

    For me, the most important aspect is our ability to think. Whatever people mean by consciousness, it would be pretty empty without thinking.

    I’d say that all mammals are capable of thinking. But, without language it would only be relatively shallow thinking.

  37. Hi Alan,

    Re the meaning of “consciousness,” you might like to have a look at the discussion in my thesis, especially pp. 77 to 111. I suspect that when people talk about consciousness, they usually have phenomenal consciousness in mind.

  38. vjtorley:
    Hi Alan,
    Re the meaning of “consciousness,” you might like to have a look at the discussion in my thesis, especially pp. 77 to 111. I suspect that when people talk about consciousness, they usually have phenomenal consciousness in mind.

    And for millennia, people talked about tree spirits.

    Talking about something is not the same thing as being useful or productive.

  39. Alan Fox: I’d still like to have some idea what people generally mean when they talk about consciousness.

    The effect of self reference.

  40. Alan:

    I’d still like to have some idea what people generally mean when they talk about consciousness.

    newton:

    The effect of self reference.

    Do you think that self-driving cars are conscious? They model themselves and the situations they find themselves in.

  41. keiths: Do you think that self-driving cars are conscious? They model themselves and the situations they find themselves in.

    Do they know they are modeling?

  42. newton:

    Do they know they are modeling?

    Who cares? Your criterion was “self-reference”, not “knowledge of self-reference”.

    I’m just pointing out the implications.

  43. Entropy:

    I see that my comment was incomplete, so let me rephrase: If you accept the idea that there’s a hard problem, then you are assuming that the question about whether it feels like anything to be an E coli bacterium is both valid and unanswerable.

    Again, no, as I already explained:

    No, to accept the hard problem merely means that you see the difficulty of explaining first-person phenomenology (or first-chimp, or first-cat, or first-bacterium phenomenology, if there even is such a thing) in terms of third-person physical descriptions. It doesn’t mean that you think the problem is unsolvable [or unanswerable, to use your word.]

    Are you hinting that you think the question is invalid? If so, why?

    I would say, with no hesitation, that it feels like something to be me. I would also say, with confidence, that it doesn’t feel like anything to be a doorknob.

    Do you think the corresponding questions are invalid? If so, why? If not, then how would you answer them? And how would you answer the corresponding question about the E coli bacterium on a nutrient gradient?

  44. Alan:

    I’d still like to have some idea what people generally mean when they talk about consciousness.

    Why not read about it? There’s a ton of information available on the Web.

    For example, at SEP:

    Concepts of Consciousness

  45. Neil:

    I’m not sure anybody really knows what “consciousness” is supposed to mean.

    Same question as for Alan: Why not read about it?

  46. petrushka:

    If and when robots or computers do what humans do, they will be conscious.

    keiths:

    Then you’re a functionalist, after all.

    petrushka:

    I don’t see any value added by labels.

    Labels are extremely valuable, and in fact you just used one. The word “label” is itself a label, and a useful one.

    But if you must, I think I’m more nearly a behaviorist.

    Your statement is a perfect match for functionalism. You’re saying that consciousness is independent of the substrate and independent of the implementation. All that matters is the function, as expressed in behavior:

    If and when robots or computers do what humans do, they will be conscious.

  47. keiths:
    Alan:

    Why not read about it? There’s a ton of information available on the Web.

    For example, at SEP:

    That article neatly illustrates the vagueness of the concept.
