A dubious argument for panpsychism

At Aeon, philosopher Philip Goff argues for panpsychism:

Panpsychism is crazy, but it’s also most probably true

It’s a short essay that only takes a couple of minutes to read.

Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:

I maintain that there is a powerful simplicity argument in favour of panpsychism…

In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.

…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.

Panpsychism is crazy. But it is also highly likely to be true.

I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.

656 thoughts on “A dubious argument for panpsychism”

  1. Alan:

    That article neatly illustrates the vagueness of the concept.

    It isn’t a single concept. Hence the title “Concepts of Consciousness”.

  2. Neil,

    Another example: people see facts as part of nature. But we create facts with our interactions with nature. Facts about time only exist because we invented clocks. Facts about distance only exist because we invented measuring systems.

    This is true only if you construe “facts” as propositions or something similar. In that case, then of course someone has to create them.

    But “fact” can also refer to a state of affairs — for example, that stars exist — in which case it is independent of us.

  3. vjtorley:
    Hi Alan,

    Re the meaning of “consciousness,” you might like to have a look at the discussion in my thesis, especially pp. 77 to 111. I suspect that when people talk about consciousness, they usually have phenomenal consciousness in mind.

    I see parallels in the usage of “life”. On its own, not a very useful word. I’ll have a look when I get more time. Thanks for the link.

  4. Alan:

    That is my point, indeed!

    Lots of terms have more than one definition, but that’s a silly reason to throw up your hands. Capable readers know how to use contextual clues to narrow down the possible meanings, and good authors will supply those clues or make it explicit.

    Chalmers, for instance, identifies exactly the type of consciousness he associates with the Hard Problem:

    The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the
    felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion; and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

    If you want to know what people mean by a term, it makes sense to read what they write about it.

  5. KN:

    But I don’t see what that has to do with what you said earlier, about the conceptual problem of consciousness.

    Neil:

    You possibly assumed that I was suggesting a problem with the concept of consciousness. While there are problems with that, my comment about a conceptual problem was about other concepts, such as fact, object, and information.

    Good grief, Neil. KN’s question was explicitly about consciousness, in a thread about consciousness:

    KN:

    Why isn’t explaining consciousness a scientific problem?

    Neil:

    Because it isn’t a problem due to a shortage of evidence. It’s a conceptual problem.

    KN:

    What’s the conceptual problem, as you see it?

    How would you answer the actual question?

  6. keiths: The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations:
    the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion; and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

    If this is typical of Chalmers, then it illustrates the incoherence of trying to analyse human thought processes by thinking about them. Doomed to failure. As I keep saying, no thinking entity can comprehend another entity as complex as itself.

  7. keiths: How would you answer the actual question?

    I love how keiths gets to ask all the questions and give none of the answers! 🙂

  8. petrushka: I’m more nearly a behaviorist.

    There are behaviourists and behaviourists. Skinner studied pigeons seemingly without any insight into, for instance, Kahneman’s “fast and slow thinking”, or into how the divided brain appears to show a visual handedness: one eye and hemisphere on the routine, the other on the lookout for danger. Skinner might have done better looking at corvids.

  9. keiths: This is true only if you construe “facts” as propositions or something similar. In that case, then of course someone has to create them.

    Not according to Frege and a lot of other philosophers.

  10. keiths: Your statement is a perfect match for functionalism. You’re saying that consciousness is independent of the substrate and independent of the implementation. All that matters is the function, as expressed in behavior:

    If and when robots or computers do what humans do, they will be conscious.

    Well, he’s a particular kind of functionalist then. He’s not talking about organizing a bunch of beer bottles to do the same thing a brain does.

  11. Alan Fox: I’d still like to have some idea what people generally mean when they talk about consciousness.

    You’ve perhaps noticed the difference between deep sleep and being awake. Or between being anaesthetized and being awake.

    It’s a noticeable difference.

    Glen Davidson

  12. Alan Fox: I love how keiths gets to ask all the questions and give none of the answers!

    If he gave you the answers you would never learn for yourself.

  13. keiths:
    newton:

    Who cares? Your criterion was “self-reference”, not “knowledge of self-reference”.

    I’m just pointing out the implications.

    Actually, it was the effect of self-reference.

  14. GlenDavidson: You’ve perhaps noticed the difference between deep sleep and being awake. Or between being anaesthetized and being awake.

    It’s a noticeable difference.

    Glen Davidson

    Yes. I already pointed out the medical definition is fine. Glasgow.

  15. Hi GlenDavidson,

    You’ve perhaps noticed the difference between deep sleep and being awake.

    Yep. That’s intransitive creature consciousness, which is but one of the many varieties of consciousness distinguished by philosophers. From pages 77 to 79 of my thesis:

    Contemporary philosophers distinguish several different senses of “consciousness”… Block (1995) defined phenomenally conscious states as states with a subjective feeling or phenomenology, which we cannot define but we can immediately recognise in ourselves, distinguishing them from access conscious states, or mental representations which are poised for free use as a premise in reasoning, and for the direct rational control of action and speech. (Block (2001, 2005) has since amended his definition: the key feature of access consciousness is now said to be the fact that the information it contains is made widely available (or “broadcast”) in a global workspace to the brain’s “consumer” systems.) Another, higher-level kind of consciousness is reflexive consciousness, or an individual’s capacity for second-order representations of its mental states….

    Phenomenal consciousness, access consciousness and reflexive consciousness are all varieties of state consciousness, which is defined as consciousness as applied to mental states and processes, as opposed to creature consciousness, or consciousness as applied to a living organism (Rosenthal, 1986). The latter may be subdivided into intransitive creature consciousness – i.e. being awake, as opposed to asleep or comatose, and having at least some sensory systems which are receptive in the way normal for a waking state (Rosenthal, 1993, p. 355) – and transitive creature consciousness – i.e. the ability to perceive and respond to objects, events, properties or facts, thereby making one conscious of them. What distinguishes the latter is that it is inherently relational: “[w]hen a creature senses something or thinks about some object, we say that the creature is conscious of that thing” (Rosenthal, 1993, p. 355). Whereas a creature can be both intransitively conscious and transitively conscious of something, mental states, as such, are not conscious of anything; thus a mental state can only be intransitively conscious.

    “What about neuroscientists?” you may be wondering. What varieties of consciousness do they distinguish? From pages 90-92 of my thesis:

    Neuroscientists commonly distinguish between primary and higher-order forms of consciousness (Edelman, 1989). Both forms appear to qualify as phenomenal in the philosophical sense. Higher-order consciousness “includes awareness of one’s self as an entity that exists separately from other entities” (Rose, 2002a, p. 6), while neurologists use the term primary consciousness (also called “core consciousness” or “feeling consciousness”) to refer to “the moment-to-moment awareness of sensory experiences and some internal states, such as emotions” (Rose, 2002a, p. 6). The latter definition could easily be interpreted as synonymous with a lower grade of phenomenal consciousness, with one caveat: the word “of” in the definition appears to imply the claim that animals need to be conscious of their experiences, in order to qualify as being conscious at all…. We might do better to re-define primary consciousness as “the moment-to-moment awareness that characterizes sensory experiences and some internal states, such as emotions”. Rose (2002a) adds that “[m]ost discussions about the possible existence of conscious awareness in non-human animals have been concerned with primary consciousness” (2002a, p. 6).

    The majority of neurologists consider primary consciousness to be the most basic form of subjective awareness. However, a few authors such as Panksepp (1998, 2001, 2003f) and Liotti and Panksepp (2003) have proposed that we possess two distinct kinds of consciousness: (i) cognitive consciousness, which includes perceptions, thoughts and higher-level thoughts about thoughts and requires a neocortex (a six-layered structure in the brain which comprises the bulk of the brain’s outer shell or cerebral cortex – the neurological consensus (Nieuwenhuys, 1998; Rose, 2002a, p. 6) is that only mammals possess this laminated structure in its developed form), and (ii) affective consciousness which relates to our feelings and arises within the brain’s limbic system, with the anterior cingulate cortex playing a pivotal role. Panksepp considers affective consciousness to be the more primitive form of consciousness… In any case, both cognitive and affective consciousness fall under the definition of primary consciousness proposed above: “the moment-to-moment awareness that characterises sensory experiences and some internal states, such as emotions”.

    So there you go. Defining consciousness is not as easy as it looks. I should mention, by the way, that my thesis was written in 2007, so as you’d expect, there have been further scientific developments since then.

  16. keiths: But “fact” can also refer to a state of affairs — for example, that stars exist — in which case it is independent of us.

    States of affairs are human artifacts.

  17. keiths: Good grief, Neil. KN’s question was explicitly about consciousness, in a thread about consciousness

    Good grief yourself. You seem to take consciousness to be a free-standing concept, completely independent of all other concepts.

  18. Neil:

    Good grief yourself. You seem to take consciousness to be a free-standing concept, completely independent of all other concepts.

    Um, no.

    keiths:

    But “fact” can also refer to a state of affairs — for example, that stars exist — in which case it is independent of us.

    Neil:

    States of affairs are human artifacts.

    Um, no. That there are stars would be true even if there were no humans to point it out.

    keiths:

    Same question as for Alan: Why not read about it?

    Neil:

    Unlike you, I am not a dictionary literalist.

    Reading about a topic makes someone a “dictionary literalist”?

    Um, no.

  19. Neil,

    You said that there is a “conceptual problem” with consciousness. KN asked you to identify it. How do you answer KN’s question?

  20. newton:

    Actually it was the effect of self reference

    “Self-reference” was the criterion. Consciousness was the purported effect.

    Do you believe that self-driving cars are conscious? They model themselves and their situations, after all.

  21. walto,

    Well, he’s a particular kind of functionalist then.

    Yes. The functionalist kind.

  22. Alan,

    If this is typical of Chalmers, then it illustrates the incoherence of trying to analyse human thought processes by thinking about them. Doomed to failure. As I keep saying, no thinking entity can comprehend another entity as complex as itself.

    That’s a goofy argument.

    Chalmers isn’t trying to comprehend an entire person. He’s simply asking how information processing gives rise to subjective experience — phenomenal consciousness.

  23. keiths: He’s simply asking how information processing gives rise to subjective experience — phenomenal consciousness.

    It’s a dumb question for the reasons stated. Qualia are an imaginary concept, and consciousness (unqualified) is not a useful one.

  24. Alan,

    It’s a dumb question for the reasons stated.

    No, and I just explained to you why your argument is silly.

    Meanwhile, it’s amusing that you’re inadvertently accusing yourself of incoherence:

    If this is typical of Chalmers, then it illustrates the incoherence of trying to analyse human thought processes by thinking about them. Doomed to failure.

    So by your own standard, your discussion of awareness is “incoherent” and “doomed to failure”.

  25. keiths: That there are stars would be true even if there were no humans to point it out.

    That this counts as a state of affairs depends on how humans delineate what they consider to be states of affairs.

  26. keiths: You said that there is a “conceptual problem” with consciousness.

    I wrote: “Because it isn’t a problem due to a shortage of evidence. It’s a conceptual problem.”

    The words “with consciousness” were not there in what I wrote. That they were not there was intentional.

  27. The evidence is right in front of you, Neil:

    KN:

    Why isn’t explaining consciousness a scientific problem?

    Neil:

    Because it isn’t a problem due to a shortage of evidence. It’s a conceptual problem.

  28. keiths: your discussion of awareness is “incoherent” and “doomed to failure”.

    The attempt to understand human consciousness by thinking about it rather than working collectively by experiment and observation, yes.

  29. keiths:

    That there are stars would be true even if there were no humans to point it out.

    Neil:

    That this counts as a state of affairs, depends on how humans delineate what they consider to be states of affairs.

    A state of affairs is just the way things are at a particular time. Stars existed long before humans arose. That state of affairs depended in no way on the availability of humans to “delineate” it.

  30. Alan,

    The attempt to understand human consciousness by thinking about it rather than working collectively by experiment and observation, yes.

    As if Chalmers were arguing against observation and experimentation.

    Come on, Alan.

  31. keiths:
    walto,

    Yes. The functionalist kind.

    Nice cutting job. I wonder if you might pick something up in film editing, maybe for toddler movies.

  32. keiths:

    Humans didn’t bring stars into existence, Neil. They existed long before us.

    Neil:

    That is not actually relevant.

    Sure it is. It shows that your statement is wrong:

    There is no “way things are”. There is only the way that we say things are.

  33. walto,

    Nice cutting job.

    As if your comment about beer bottles were relevant to the question of whether petrushka is a functionalist.

  34. I read the description of the hard problem and still don’t see what the problem is. I admit that we don’t have an explanation, but I don’t see that as any special kind of problem.

  35. Neil Rickert to keiths: There is no “way things are”. There is only the way that we say things are.

    In his book ‘Saving the Appearances’, Barfield coins various terms relating to how we interact with the world. The following are rough descriptions of his terms. ‘Participation’ is how we deal with the world from within, ‘collective representations’ relates to our understanding of the phenomena we perceive, the ‘unrepresented’ are the things-in-themselves, ‘alpha thinking’ is ordinary everyday thinking, and ‘beta thinking’ is philosophical thinking. The machine-like view of the universe, which came to fruition in the nineteenth century, he called ‘onlooker consciousness’.

    From a video based on the book, he says:

    To the extent that the phenomena are experienced as machine, they are believed to exist independently of man, not to be participated, and therefore not to be in the nature of representations. We have seen that all of these beliefs are fallacious… What then had alpha thinking achieved at precisely this point in the history of the West? It had temporarily set up the appearances of the familiar world as things wholly independent of man. It had clothed them with the independence and extrinsicality of the unrepresented itself. But a representation which is collectively mistaken for an ultimate ought not to be called a representation; it is an idol… I shall have succeeded very poorly if I haven’t made one thing plain. It is only necessary to take the first feeble steps towards a renewal of participation – that is, the bare acknowledgement in beta thinking that the phenomena are collective representations – in order to see that the actual evolution of the earth we know must have been at the same time an evolution of consciousness. For consciousness is correlative to phenomena. Any other picture we may form of evolution amounts to no more than a symbolic way of depicting changes in the unrepresented. Yet curiously enough, as already observed, this latter kind of evolution – that is, changes in the unrepresented – is just what is assumed not to have taken place. By treating the phenomena of nature as objects wholly extrinsic to man, with an origin and evolution of their own independent of man’s evolution and origin, nineteenth-century science and nineteenth-century speculation succeeded in imprinting on the minds and imaginations of men and women their picture of an evolution of idols. One result of this has been to distort very violently our conception of the evolution of human consciousness, or rather it has caused us virtually to deny such an evolution in the face of what otherwise must have been accepted as unmistakable evidence.

    From a website I have just come across:

    Your mind is not inside of your brain; your brain is inside Mind.

    When I finally realized this point — that mind is a container within which I and the world exist — everything seemed to pop out at me, as though the world had been flat for so many years and in an instant returned to its proper three-dimensional state…

    suggesting that brains generate consciousness is like suggesting that whirlpools generate water…

    This waking world that we all seem to inhabit together, which seems to be the same for everyone, then, is what we can call consensus reality.

    What is here called ‘consensus reality’, Barfield had termed ‘collective representations’.

    The present day ‘onlooker consciousness’ which imagines I as subject as separate from an outer world of objects is a temporary stage which will be overcome by ‘final participation’.

  36. That’s an interesting article, and I will need some time to read it and understand it, but I do not regard brains as Turing machines. Absolute determinism is posited by physics, but within the realm of the possible, one cannot determine what a brain will do by analyzing its state. Even the early behaviorists spoke of probabilities rather than of predictable outcomes.

    There is a rather crude concept of AI that involves the imitation game. Build a machine that can play chess, or a machine that can pass for a human interlocutor.

    That is not what I would call functionally equivalent, although such machines can be very useful, and within their scope, vastly better than humans.

  37. A common and persistent objection, however, is that no such characterizations can capture the qualitative character, or “qualia”, of experiential states such as perceptions, emotions, and bodily sensations, since they would leave out certain of their essential properties, namely, “what it’s like” (Nagel 1974) to have them.

    This seems to be the central problem addressed by early behaviorists, and their “solution” was to declare it off limits to study. I think that was reasonable at the time, because there was no technology with which to probe such phenomena.

    The solution is still over the horizon, but I think we have a direction to travel. We know enough about brains to emulate bits and pieces. Biology has a several-hundred-million-year head start in building brains, and I do not expect to see anything that I would call AI. It’s equivalent to solving the problem of first life.

    But I’m comfortable in believing that a necessary component of “intelligence” is the ability to evolve behavior, and that behavior evolves both at the biological population level and at the neural connection level. Machines that do not and cannot do this are not candidates for being called AI, and will not evoke questions about whether they “experience” anything. So for the moment, questions about artificial qualia are moot.

  38. Just to note that the behaviorist solution was to say that all mental states are off-limits to empirical explanation, and not just “qualia”. Mental states are usually taken to be representations, or to be about things. Qualia are usually understood as non-representational states of feeling, of sheer awareness.

    Chalmers et al. depend on this idea that we can conceive of a distinction between the representational and non-representational character of mental states, so we can conceive of beings that have all of our representational states but none of our qualia. That’s crucial to their intuition that qualia are metaphysically weird and require an explanation that goes beyond what any cognitive science could provide.

    The breakthrough with functionalism was to show that we can talk about mental states as representational states of the system, from a third-person or objective standpoint, on analogy with computational states of computing machines.
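
    The computing-machine analogy in this paragraph can be made concrete. The sketch below is purely illustrative (the class and function names are my own, not anything from the literature cited): it shows multiple realizability, the functionalist idea that the same functional role can be played by physically different implementations, described from a third-person standpoint purely in terms of inputs and outputs.

```python
# Illustrative sketch of multiple realizability: two different "substrates"
# realize the same functional role (here, addition). A functional
# description cares only about the input/output profile, not about the
# stuff implementing it.

class ArithmeticAdder:
    """Realizer 1: native machine arithmetic."""
    def add(self, a: int, b: int) -> int:
        return a + b

class TallyAdder:
    """Realizer 2: concatenating tally marks and counting them."""
    def add(self, a: int, b: int) -> int:
        return len("|" * a + "|" * b)

def same_functional_role(x: int, y: int) -> bool:
    """From the third-person standpoint the two realizers are
    indistinguishable: identical outputs for identical inputs."""
    return ArithmeticAdder().add(x, y) == TallyAdder().add(x, y)

print(same_functional_role(2, 3))  # → True
```

    Nothing hangs on the example being addition; the point is only that a functional description quantifies over roles, not over the material that realizes them.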

    I recently read two fascinating papers that bear on this: The cognitive neuroscience revolution and From symbols to icons: the return of resemblance in the cognitive neuroscience revolution. (Sadly both papers are behind paywalls but I can send PDFs to those interested.)

    The gist is that the early functionalists were interested in computer science as a conceptual framework for cognitive science, which meant that they thought about mental representations as symbols without any regard for implementation. When neuroimaging (first CAT, then MRI and fMRI) improved to the point where neuroscience could be integrated with cognitive science, there was a slow shift from thinking about mental representations as symbols to neural representations as icons. One important consequence of this shift has been in the status of neural representations. Whereas for the functionalists mental representations were mere posits, it has been argued that neural representations have been directly observed (see Neural Representations Observed).

  39. I do not accept the concept of neural representations. What brains do is behave.

    There is no representation of external objects or events in brains. That is not a philosophical position, it is a fact. (Facts can be wrong, I admit, and I can be wrong.)

    But claims about seeing representational neural activity bring to mind claims about lie detectors. There is always someone claiming that they finally have broken the code, and neural activity X corresponds to some objective mental state.

  40. petrushka:

    I do not accept the concept of neural representations. What brains do is behave.

    There is no representation of external objects or events in brains. That is not a philosophical position, it is a fact.

    Suppose I were to abduct you and take you by black helicopter to an undisclosed location run by the CIA, where you were asked to draw the rough floorplan of your house. I suspect you’d be able to do it; most people would.

    Your house is an external object. If it is not represented in your brain, how are you able to draw the floorplan from the inside of a CIA lab, where your house is not in view?
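
    The argument here is, in effect, that offline behaviour requires stored internal state that covaries with the external object. A toy sketch of that point (the layout and names are hypothetical, and the dictionary is of course nothing like an actual neural code):

```python
# Toy version of the floorplan argument: with the house nowhere in view,
# any reproduction of its layout must be generated from stored internal
# state that covaries with the layout, i.e. from a representation.

STORED_LAYOUT = {          # internal state acquired earlier (hypothetical)
    (0, 0): "kitchen",
    (1, 0): "hall",
    (1, 1): "bedroom",
}

def draw_from_memory():
    """Produce the floorplan using only the stored state: no access to
    the house itself is needed at recall time."""
    return sorted(STORED_LAYOUT.items())

print(draw_from_memory())
```

    The dictionary stands in for whatever the actual encoding might be; the argument only needs there to be *some* such state.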
