A dubious argument for panpsychism

At Aeon, philosopher Philip Goff argues for panpsychism:

Panpsychism is crazy, but it’s also most probably true

It’s a short essay that only takes a couple of minutes to read.

Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:

I maintain that there is a powerful simplicity argument in favour of panpsychism…

In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.

…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.

Panpsychism is crazy. But it is also highly likely to be true.

I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.

656 thoughts on “A dubious argument for panpsychism”

  1. Alan Fox: I was wondering where Dennett “pretends”.

    You will keep wondering forever. It takes knowledge of character to know it when you see it.

  2. Erik: You will keep wondering forever. It takes knowledge of character to know it when you see it.

    I mean I’d like to see the claim that Dennett is “pretending” supported. Where in your one-hour video is Dennett pretending? Or is it all pretence?

  3. Alan Fox,

    Oh, my bad. I forgot for a while what a hopeless reductionist you are.

    The video shows that Dennett pretends that his talk about memes and deepities and such yields a scientific theory of consciousness that leaves nothing unresolved.

    If you want to see him just pretending, perhaps like a clown or singing Great Pretender, I have no material support for that.

  4. Erik:
    Alan Fox,

    Oh, my bad. I forgot for a while what a hopeless reductionist you are.

    The video shows that Dennett pretends that his talk about memes and deepities and such yields a scientific theory of consciousness that leaves nothing unresolved.

    If you want to see him just pretending, perhaps like a clown or singing Great Pretender, I have no material support for that.

    Alan Fox is being a good literalist.

    What’s the evidence that Dennett was pretending?

    Well, that he’s spouting nonsense that was supposed to be scientific. Oh, but maybe he believes in it, just as he believes that consciousness is an illusion.

    Distinction without a difference, that only the literalist would care about. And don’t forget, Dennett’s a Famous Wise Man who gets to spout nonsense without being called on it by the many non-skeptics (many of whom claim to be skeptics) who tremble and fear before Famous Wise Men.

    Glen Davidson

  5. GlenDavidson: Why are you unable to think of consciousness and/or qualia in any way other than your spectator view of it? It’s certainly not how I see it, which, by the way, you already misconstrued badly in the past.

    I wasn’t even trying to address how you see consciousness, and I never said or implied that the spectator theory of knowledge was the only way to think about consciousness. I was only pointing out that the hard problem of consciousness arises because of specific theoretical moves that seem ‘natural’ to the philosophers who invented that problem: Thomas Nagel, Saul Kripke, and especially David Chalmers. If you don’t want to talk about what they’re talking about, that’s fine with me.

    I only brought it up because it seemed to me that one would have to already be convinced that The Problem of Consciousness is a Very Hard Problem indeed in order to think that panpsychism is a solution to it. I posted the exchange between Dennett and Strawson for the same reason — because Strawson has become convinced that panpsychism is a viable solution to the hard problem of consciousness.

    keiths: Also, it’s interesting that you regard functionalism as “deeply problematic.” Dennett is a functionalist, yet you seem to think he’s on the right track.

    Why is that interesting? Lots of philosophers and scientists make important steps and also missteps. Dennett has been an influence on me, and so have been philosophers he’s had productive disagreements with. At the end of the day I might even disagree with him more than I agree with him, but I’m always better for having taken the time to think through his arguments.

    keiths: That difficulty is there whether or not functionalism is true.

    I’m not entirely disagreeing, but how we describe the difficulty does depend on whatever our background theory of cognition is. I mean, if we’re trying to articulate an intuition that some things can’t be explained, then how we articulate that intuition will depend on what it is that we think we can explain!

    I don’t think that rejecting functionalism for a much more embodied/embedded or enactive view eliminates the conceptual and epistemic ‘gap’ between first-personal, phenomenological description of lived experience and the third-personal scientific explanation of causal regularities. But it does change how we conceptualize that ‘gap’.

  6. vjtorley: I would say more, but I need to get some shut-eye, as I have to rise and shine in four hours.

    And bring out the glory glory!

  7. KN:

    I don’t think that rejecting functionalism for a much more embodied/embedded or enactive view eliminates the conceptual and epistemic ‘gap’ between first-personal, phenomenological description of lived experience and the third-personal scientific explanation of causal regularities.

    Yet you wrote this yesterday:

    I would add that the easy/hard problem distinction relies on assuming functionalism about cognition, which is deeply problematic in some ways.

    You’ve completely contradicted yourself within a space of hours.

  8. keiths:
    I wonder: What, in Goff’s view, are electrons actually conscious of?

    The essay gives no hints.

    If he’s a Spinozist, they’re probably ideas of (perhaps changes in) their own “bodies.”

  9. Erik: Seriously, who is guiding who? Why would an organism need guiding and why would a neurocomputational system take up the task? Is a neurocomputational system something other than organism? Is an organism minus neurocomputational system still an organism? And a neurocomputational system minus organism still a neurocomputational system?

    I think those are good questions.

  10. keiths: You’ve completely contradicted yourself within a space of hours.

    Only if I thought that the hard problem of consciousness, in Chalmers’ sense, was the best or only way of thinking about the ‘explanatory gap’ (which is not really explanatory, but ok) between science and phenomenology. I don’t think that. In fact I think that the impossibility of giving a functionalist explanation of qualia is a completely mistaken approach to thinking about the relation between cognitive science and phenomenology.

  11. walto: I think those are good questions.

    Really? Would you say the same if we were talking about an insulin-producing system instead of a neurocomputational system?

  12. GlenDavidson: Alan Fox is being a good literalist.

    I do tend to take the words that people use to describe their ideas, thoughts and opinions literally.

    What’s the evidence that Dennett was pretending?

    Well, that he’s spouting nonsense that was supposed to be scientific. Oh, but maybe he believes in it, just as he believes that consciousness is an illusion.

    Now I read that as confirming that you dismiss Dennett’s views (that consciousness is a flawed concept) as pretence, the same sort of remark as “Oh, atheists aren’t really disbelievers – they just hate God”.

    Distinction without a difference, that only the literalist would care about. And don’t forget, Dennett’s a Famous Wise Man who gets to spout nonsense without being called on it by the many non-skeptics (many of whom claim to be skeptics) who tremble and fear before Famous Wise Men.

    So you don’t agree with Dennett on consciousness. Fair enough.

  13. Erik: The video shows that Dennett pretends that his talk about memes and deepeties and such yield a scientific theory of consciousness that leaves nothing unresolved.

    Not that I’ve watched it all, but I didn’t get that impression from what I did watch. You seem to be confirming that you claim the whole video is a pretence on Dennett’s part.

    There seems to be quite a lot more of Dennett pretending. Here’s Dennett in a TED Talk, “The Illusion of Consciousness”.

  14. Btw, when Dennett says that consciousness is an illusion, he means something very specific: that consciousness as understood within the manifest image is an evolved user-illusion. The denial is not that we are conscious (despite the frankly malicious attacks launched against him by philosophers like Nagel and Strawson) but that the manifest image is a reliable guide to what consciousness is.

  15. Kantian Naturalist: …consciousness as understood within the manifest image is an evolved user-illusion.

    That we can understand ourselves less well than we imagine and that a third-party (experimental) approach can demonstrate the inadequacy of the first-person approach.

  16. Erik:
    Alan Fox,

    Yes, I have already watched a bunch of those. Thank you very much.

    I watched that one I linked to, thought I must have seen it before, then realized I’d read it in From Bacteria to Bach and Back.

  17. Kantian Naturalist:
    Btw, when Dennett says that consciousness is an illusion, he means something very specific: that consciousness as understood within the manifest image is an evolved user-illusion. The denial is not that we are conscious (despite the frankly malicious attacks launched against him by philosophers like Nagel and Strawson) but that the manifest image is a reliable guide to what consciousness is.

    Dennett also says that free will (another user-illusion) is as real as colors, promises and euros.

    Is it not pretentious, or deliberately provocative, to put it by saying that all those things are illusions? First, it should be pretty obvious that you can’t deny that those things exist, and Dennett should be among the first to know that physicalists/naturalists (like Alan Fox) tend to equate illusion with non-existence.

    Second, free will cannot be institutionally reformed, established or abolished like euros can, cannot be given and broken like promises can, and cannot be associated with a given physical wavelength like colors can. So where’s the point of analogy? Looks like another pretentious rhetorical move.

  18. KN:

    Btw, when Dennett says that consciousness is an illusion, he means something very specific: that consciousness as understood within the manifest image is an evolved user-illusion. The denial is not that we are conscious (despite the frankly malicious attacks launched against him by philosophers like Nagel and Strawson) but that the manifest image is a reliable guide to what consciousness is.

    Yes. Alan’s position is bizarre, and certainly not something that Dennett would support.

  19. Kantian Naturalist: Really? Would you say the same if we were talking about an insulin-producing system instead of a neurocomputational system?

    I think so, yeah. I think it’d be weird to separate the organism from that system.

  20. Alan Fox: Now I read that as confirming your dismissal of Dennett’s views (that consciousness is a flawed concept) as pretense as the same sort of remark as “Oh, atheists aren’t really disbelievers – they just hate God”.

    Same literalistic nonsense.

    Well, why don’t you just go on believing that. It would be too much to expect you to admit that you’re being literalistic where people are writing loosely and metaphorically. Normally, in other words.

    Glen Davidson

  21. KN,

    Only if I thought that the hard problem of consciousness, in Chalmers’ sense, was the best or only way of thinking about the ‘explanatory gap’ (which is not really explanatory, but ok) between science and phenomenology.

    The Hard Problem is the problem of bridging the explanatory gap. So when you wrote this…

    I don’t think that rejecting functionalism for a much more embodied/embedded or enactive view eliminates the conceptual and epistemic ‘gap’ between first-personal, phenomenological description of lived experience and the third-personal scientific explanation of causal regularities.

    …you were in fact contradicting your earlier statement:

    I would add that the easy/hard problem distinction relies on assuming functionalism about cognition, which is deeply problematic in some ways.

    In any case, your earlier statement is incorrect, as I explained earlier. The explanatory gap persists whether or not functionalism is true, so the easy/hard distinction does not rely on assuming functionalism.

    Also, you spoke of “rejecting functionalism for a much more embodied/embedded or enactive view”. That indicates some additional confusion on your part about functionalism, which is not incompatible with “embodied/embedded or enactive views.”

  22. Certain things in the world have subjective mentality. That is for sure. Does consciousness simply pop into the universe suddenly, and for no reason, when brains reach a certain level of complexity? Such a notion seems quite absurd.

    If subjective conscious entities are indeed natural elements within the universe, we can try to find where they exist, but only if they contribute some unique causal agency.

    I speculated in the following paper where we may locate the natural conscious entities. https://philpapers.org/rec/SLESA

  23. keiths: The Hard Problem is the problem of bridging the explanatory gap

    I simply disagree. I take the Hard Problem of Consciousness to be what Chalmers said it was when he coined the term, which he couches precisely in terms of the impossibility of explaining qualia in terms of cognitive functions. If you want to use the term “the Hard Problem of Consciousness” to mean “the explanatory gap,” then OK.

    The Hard Problem of Consciousness is not the puzzle of “how do we get from cognitive science to phenomenology?” (Ray Jackendoff calls this “the mind-mind problem”: the relation between the computational mind and the phenomenological mind.) It’s a specific position: that it is impossible to explain phenomenal consciousness in computational, cognitive-scientific terms. Conversely, if we had a complete and comprehensive cognitive neuroscience, there would not be any explanation of phenomenal consciousness. This claim — that a cognitive-scientific explanation of phenomenal consciousness is impossible — hinges on Chalmers’s argument for how we can infer possibility from conceivability. And that in turn depends on some technical issues in philosophy of language.

  24. keiths: Also, you spoke of “rejecting functionalism for a much more embodied/embedded or enactive view”. That indicates some additional confusion on your part about functionalism, which is not incompatible with “embodied/embedded or enactive views.”

    It depends on the details. It’s probably fair to say that predictive processing is an embodied/embedded functionalism. But enactivism is widely construed as committed to anti-representationalism, and functionalism is a theory of mental states as representations. (Though it may be possible to resolve this disagreement at a theoretical level.)

  25. lorenzosleakes: Does consciousness simply pop into the universe suddenly, and for no reason, when brains reach a certain level of complexity? Such a notion seems quite absurd.

    Welcome to TSZ!

    Indeed. One of my objections to “consciousness” is that it often suggests a false dichotomy of having it or not. “Awareness” is a better way of thinking about cognitive abilities. There’s a continuum and it is less misleading to talk of levels of awareness.

  26. Alan Fox: Indeed. One of my objections to “consciousness” is that it often suggests a false dichotomy of having it or not. “Awareness” is a better way of thinking about cognitive abilities. There’s a continuum and it is less misleading to talk of levels of awareness.

    I agree that we should talk about levels or degrees of awareness, but I’ll confess that I don’t share your intuition that the word “awareness” conveys gradations whereas the word “consciousness” does not.

  27. KN, to Alan:

    I agree that we should talk about levels or degrees of awareness, but I’ll confess that I don’t share your intuition that the word “awareness” conveys gradations whereas the word “consciousness” does not.

    Nor I. It makes perfect sense to talk about someone “drifting in and out of consciousness”, for instance.

  28. Alan Fox,

    I found this line interesting from the description: “One function of this circuitry is to attribute awareness to others: to compute that person Y is aware of thing X. In Graziano’s theory, the machinery that attributes awareness to others also attributes it to oneself.”

    I think that’s really interesting and probably right, but with one crucial revision: I’d say that the machinery that attributes subjectivity to others also attributes it to oneself. We apply the intentional stance not just to others but to ourselves.

    A slightly better way of putting it would be that we learn how to navigate social environments by mastering the vocabulary of agency and propositional attitudes (e.g. beliefs, desires, wishes, thoughts), but in doing so we apply this vocabulary to ourselves as well as to others. And that’s just what it is to be a competent intentional agent — the kind of being that understands itself and others in terms of the vocabulary of agency and propositional attitudes.

    But in light of that, it’s gotta be, as Dennett pretty much says, a category mistake to understand what’s happening in the brain in terms of what’s happening at the level of social dynamics. That’s a conflation of the personal level and the subpersonal level. When we do cognitive and affective neuroscience, we’re not going to find any patterns of neuronal activity that map neatly onto psychological states, because the function of folk-psychological vocabulary is for navigating social spaces, not for disclosing what’s really going on under the hood.

    I guess that puts me somewhere between Dennett and Churchland on a lot of these issues.

  29. KN:

    Only if I thought that the hard problem of consciousness, in Chalmers’ sense, was the best or only way of thinking about the ‘explanatory gap’

    keiths:

    The Hard Problem is the problem of bridging the explanatory gap.

    KN:

    I simply disagree.

    Come on, KN.

    From the Wikipedia article on the explanatory gap:

    In philosophy of mind and consciousness, the explanatory gap is the difficulty that physicalist theories have in explaining how physical properties give rise to the way things feel when they are experienced…

    The explanatory gap has vexed and intrigued philosophers and AI researchers alike for decades and caused considerable debate. Bridging this gap (that is, finding a satisfying mechanistic explanation for experience and qualia) is known as “the hard problem”.

    [emphasis added]

    From the Internet Encyclopedia of Philosophy:

    The hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious…

    Joseph Levine argues that there is a special “explanatory gap” between consciousness and the physical (1983, 1993, 2001). The challenge of closing this explanatory gap is the hard problem.

    [emphasis added]

    Scholarpedia:

    The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience (i.e., phenomenal consciousness, or mental states/events with phenomenal qualities or qualia). Why are physical processes ever accompanied by experience?

    From the Stanford Encyclopedia of Philosophy:

    Others may seem less tractable, especially the so-called “hard problem” (Chalmers 1995) which is more or less that of giving an intelligible account that lets us see in an intuitively satisfying way how phenomenal or “what it’s like” consciousness might arise from physical or neural processes in the brain.

    From Chalmers himself, in the original paper:

    The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience…

    Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does. If any problem qualifies as the problem of consciousness, it is this one.

  30. KN,

    …functionalism is a theory of mental states as representations.

    No, it isn’t. Functionalism is the idea that the functional role of a mental state is what matters, not the way that function is implemented or the substrate in which it is implemented.

    That’s the third misconception about functionalism that you’ve expressed in this thread. Why are you finding the concept so difficult to grasp?

  31. KN,

    If you want to work in philosophy of mind — and you’ve told us that you do — then you can’t just wing it. You’ve got to buckle down and learn the basic concepts, including things like compatibilism, the Hard Problem, and functionalism.

    There’s no shortcut.

  32. lorenzosleakes:

    I speculated in the following paper where we may locate the natural conscious entities. https://philpapers.org/rec/SLESA

    Hi, Lorenzo. Welcome to TSZ.

    In your paper, you attribute consciousness to elementary particles, such as electrons. What do you think electrons are conscious of, and how could we ever test that idea?

  33. keiths:
    KN,

    If you want to work in philosophy of mind — and you’ve told us that you do — then you can’t just wing it. You’ve got to buckle down and learn the basic concepts, including things like compatibilism, the Hard Problem, and functionalism.

    There’s no shortcut.

    I understand all this stuff far better than you do or ever will, but I’m sick and tired of how every conversation with you devolves into a dick-measuring competition.

  34. Kantian Naturalist:

    I understand all this stuff far better than you do or ever will, but I’m sick and tired of how every conversation with you devolves into a dick-measuring competition.

    How dare I accuse The Great KN of misunderstanding compatibilism, functionalism and the Hard Problem!

    Get over yourself, KN. You’ve misunderstood some basic concepts in the philosophy of mind, and you’ll need to rectify that if you intend to work in the field.

    Try to respond constructively instead of shooting the messenger.

  35. keiths: No, it isn’t. Functionalism is the idea that the functional role of a mental state is what matters, not the way that function is implemented or the substrate in which it is implemented.

    Without attempting to judge whether functionalism is true or not, the implementation of the concept reminds me of FSCIO. It’s one of those map/territory things.

    What matters is not what we think about substrates, but whether, in fact, we can implement mental states (or more specifically, the behaviors that lead us to say there is a mental state) in non-biological substrates. Show me the beef.

    Produce an example, and the philosophical debate becomes superfluous. Actually, it is superfluous anyway.

  36. petrushka:

    Without attempting to judge whether functionalism is true or not, the implementation of the concept reminds me of FSCIO. It’s one of those map/territory things.

    What matters is not what we think about substrates, but whether, in fact, we can implement mental states (or more specifically, the behaviors that lead us to say there is a mental state) in non-biological substrates.

    If you’re skeptical of that, then you’re not a functionalist.

    Produce an example, and the philosophical debate becomes superfluous.

    It wouldn’t, actually. To borrow your earlier example, consider an android on a par with Star Trek’s “Data”. If such an android were ever constructed, people would be saying “Impressive, but is he really conscious? Are his “mental states” really mental states? Or is he just an elaborate simulation of consciousness and thinking?”

  37. keiths: How dare I accuse The Great KN of misunderstanding compatibilism, functionalism and the Hard Problem!

    Get over yourself, KN. You’ve misunderstood some basic concepts in the philosophy of mind, and you’ll need to rectify that if you intend to work in the field.

    Try to respond constructively instead of shooting the messenger.

    My “misunderstandings”, as you call them, consist entirely of the fact that I don’t always use words in ways that are consistent with whatever education you’ve given yourself by reading some encyclopedia articles on the Internet.

  38. KN,

    My “misunderstandings”, as you call them, consist entirely of the fact that I don’t always use words in ways that are consistent with a light skimming of a few Internet encyclopedia articles.

    Your misunderstandings consist of not understanding the concepts.

    Learn them. They are important.

  39. keiths: It wouldn’t, actually. To borrow your earlier example, consider an android on a par with Star Trek’s “Data”. If such an android were ever constructed, people would be saying “Impressive, but is he really conscious? Are his “mental states” really mental states? Or is he just an elaborate simulation of consciousness and thinking?”

    That’s exactly what I’m addressing. That kind of question belongs to an obsolete era of philosophy and theology. The questions are unanswerable and therefore unproductive.

    Now, something like Star Trek’s Data will not pop fully formed from the head of Zeus or IBM. It will evolve, and we will get the same kind of useless questions that are asked about cats and dogs and apes.

    Quite frankly, I cannot be certain that the posters at this site are not bots. I judge them not to be, but that is based entirely on behavior.

  40. petrushka,

    The questions are unanswerable…

    That’s a bit premature. Philosophers and scientists don’t give up so easily.

    …and therefore unproductive.

    You can learn a lot by thinking carefully about a question, even if you don’t end up answering it.

  41. petrushka: What matters is not what we think about substrates, but whether, in fact, we can implement mental states (or more specifically, the behaviors that lead us to say there is a mental state) in non-biological substrates.

    I like that switch from “mental state” to “behaviors that lead us to say there is a mental state”.

    The expression “mental state” should be removed from the vocabulary used by philosophers. Talk of mental states muddles the issues.

  42. Kantian Naturalist: I understand all this stuff far better than you do or ever will, but I’m sick and tired of how every conversation with you devolves into a dick-measuring competition.

    I would hazard the claim that there’s no question who’s the bigger dick, but there’s a (slim) chance a moderator might disapprove.

  43. Neil Rickert: I like that switch from “mental state” to “behaviors that lead us to say there is a mental state”.

    The expression “mental state” should be removed from the vocabulary used by philosophers. Talk of mental states muddles the issues.

    It depends on the character of the talk — confusing talk about mental states muddles the issues, clear talk about mental states does not. I don’t think that one is implicitly committed to dualism simply by using the term “mental state,” and forbidding use of the term isn’t going to lead to any progress in philosophy or psychology.

  44. I would hazard the claim that there’s no question who’s the bigger dick, but there’s a (slim) chance a moderator might disapprove.

    …says walto, who often blows a gasket, like KN just did, when his mistakes are pointed out to him.

  45. Neil:

    The expression “mental state” should be removed from the vocabulary used by philosophers. Talk of mental states muddles the issues.

    KN:

    It depends on the character of the talk — confusing talk about mental states muddles the issues, clear talk about mental states does not.

    Right, and forbidding the use of the term “mental state” would impede — not improve — discussion. It would be a return to behaviorism, essentially.

  46. keiths: …says walto, who often blows a gasket, like KN just did, when his mistakes are pointed out to him.

    Who’d you think I meant?

  47. Kantian Naturalist: I agree that we should talk about levels or degrees of awareness, but I’ll confess that I don’t share your intuition that the word “awareness” conveys gradations whereas the word “consciousness” does not.

    Let’s see.

    Consciousness has a perfectly good medical usage: deeply unconscious, barely conscious, etc. But this is not what I am suggesting as a phenomenon shared across living organisms. Consciousness doesn’t really have a generally accepted meaning when talking about other species. Are dogs conscious? Cats? Bacteria? I think I can argue that there is a very primitive level of awareness when a flagellate bacterium employs a run-and-tumble strategy to maintain itself in an optimal nutrient concentration.
