A dubious argument for panpsychism

At Aeon, philosopher Philip Goff argues for panpsychism:

Panpsychism is crazy, but it’s also most probably true

It’s a short essay that only takes a couple of minutes to read.

Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:

I maintain that there is a powerful simplicity argument in favour of panpsychism…

In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.

…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.

Panpsychism is crazy. But it is also highly likely to be true.

I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.

656 thoughts on “A dubious argument for panpsychism”

  1. How is brains doing addition relevant to the hard question?

    The potentially answerable question is, what happens if we successfully emulate the actual architecture of brains. Before we get to the philosophical question, we should observe the phenomenon we wish to understand.

  2. keiths: Consider my addition scenario. Before you look at the list, you don’t know what the numbers are. After you look at the list, you do know what the numbers are.

    Not at all relevant to the point.

  3. petrushka: The potentially answerable question is, what happens if we successfully emulate the actual architecture of brains.

    What do you think the answer will be?

    I’d say if/when we emulate the architecture it will tell us nothing at all about whether the emulation is a philosophical zombie or not.

    peace

  4. fifthmonarchyman: I’d say if/when we emulate the architecture it will tell us nothing at all about whether the emulation is a philosophical zombie or not.

    Hard to say. But then we probably won’t ever emulate it.

    It seems possible that computers already have qualia. There’s no reason that the qualia should be apparent from the usual input/output channels. They could internally have qualia, but there’s no way for us to tell.

    And then there’s the possibility that a system could have qualia, but not be conscious of having qualia. Maybe having qualia isn’t really the hallmark of consciousness.

  5. fifthmonarchyman: quote:
    Random: lacking a definite plan, purpose, or pattern.
    end quote:

    As you know I don’t think that actual randomness exists

    Yes, that is why I am wondering why

    “ I think it’s a step forward but needs to take randomness into account to be viable.”

    one would need to take a non-actual subjective variable into account to be a viable explanation for something.

    So when I speak of randomness most often I mean apparent randomness and by “apparent” I usually mean apparent from my particular perspective as the observer/evaluator.

    Exactly, from your perspective randomness does not exist; therefore the actual randomness/non-randomness dichotomy does not exist.

    It just seems strange you reject the actual existence of one of your variables that you believe needs to be taken into account to make your intent detector work.

    peace

  6. petrushka,

    How is brains doing addition relevant to the hard question?

    You and Neil are the ones making the bizarre claim — that brains don’t process information — so it’s up to you to tell us how your claim is relevant to the hard problem.

    Instead, you’re avoiding my question:

    Since you agree that people can process information, but deny that brains do so, perhaps you can answer the question that Neil is avoiding:

    And in terms of my [addition] example, you say that the brain isn’t performing the addition, but that you are. What is your brain doing, then? And if your brain isn’t performing the addition, then how is it getting done? Are your two kidneys taking on the job, perhaps with help from your left femur?

  7. keiths:

    Consider my addition scenario. Before you look at the list, you don’t know what the numbers are. After you look at the list, you do know what the numbers are.

    Neil:

    Not at all relevant to the point.

    Completely relevant to the point. The information contained in the list — the identity of the numbers to be added together — doesn’t just teleport into your awareness. It has to get there by physical means.

    The light reflecting off the list carries that information into your visual system. Block the light, and you block the information.

  8. fifthmonarchyman: It seems to me that the conclusion is simply a restating of the premise positively rather than negatively.

    No. I’m sorry but that is wrong. Read it again more carefully.

  9. newton: one would need to take a non-actual subjective variable into account to be a viable explanation for something.

    1) What is wrong with subjective observations? Subjectivity is what consciousness is all about.

    2) Who said anything about explaining something? If you could explain consciousness comprehensively you could duplicate it, thus demonstrating that your explanation was incorrect.

    newton: It just seems strange you reject the actual existence of one of your variables that you believe needs to be taken into account to make your intent detector work.

    Why is that strange?

    Real randomness does not exist but apparent randomness is a necessary consequence of our lack of omniscience.

    Separating information from apparent “random” noise is a hallmark of human consciousness. We are scary good at it.

    peace

  10. keiths:
    walto,

    Which is exactly what Comte believed regarding the composition of stars. His verdict was premature, and so is yours.

    No. Comte was a positivist. Like you. That is, he believed that this info would never be available to science and that, therefore, it could not be known. You and I both realize that he was mistaken in his empirical prediction. But, not being a positivist, I don’t agree that that was a necessary prerequisite for knowledge. I just think it’s generally very important to have.

    Dunno why you can’t make these distinctions. Are they too subtle for you?

  11. fifthmonarchyman: I’d say if/when we emulate the architecture it will tell us nothing at all about whether the emulation is a philosophical zombie or not.

    Intuitively, that seems correct to me. There simply isn’t any empirical information that can answer that. As you’ve said, this is much like the “other minds” question.

  12. keiths: but that you are. What is your brain doing, then? And if your brain isn’t performing the addition, then how is it getting done?

    The brain is a component of a behaving person. I don’t know exactly what the brain is doing when a person computes addition. I suspect that if we did know, AI would be further along and the hard problem would look different.

    The distinction I am trying to make is that we do know how electronic computers do addition. At least we know how existing designs work. We know how transistors work; we know how to configure arrays of transistors to make logic gates; we know how to connect logic gates to do arithmetic (a toy sketch follows at the end of this comment).

    But we do not know how people do arithmetic.

    And worse, as evolutionists, we know that brains evolved for behaviors that have little or nothing to do with counting and adding.
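
    To make that contrast concrete, here is a toy Python sketch (illustrative only, not a description of any real processor) of how addition can be assembled from nothing but logic gates:

        # Toy sketch: integer addition built from AND/OR/XOR gates.
        # Illustrative only; real adders are optimized circuits.
        def and_gate(a, b): return a & b
        def or_gate(a, b): return a | b
        def xor_gate(a, b): return a ^ b

        def full_adder(a, b, carry_in):
            """Add three bits; return (sum_bit, carry_out)."""
            partial = xor_gate(a, b)
            sum_bit = xor_gate(partial, carry_in)
            carry_out = or_gate(and_gate(a, b), and_gate(partial, carry_in))
            return sum_bit, carry_out

        def ripple_carry_add(x, y, width=8):
            """Add two non-negative ints by chaining full adders bit by bit."""
            result, carry = 0, 0
            for i in range(width):
                bit_sum, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
                result |= bit_sum << i
            return result  # any carry out of the top bit is simply dropped

        assert ripple_carry_add(19, 23) == 42

    Every step of that sketch is traceable down to individual gates; as the comment notes, nothing comparable can yet be said about a person adding “in their head.”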

  13. walto: Intuitively, that seems correct to me. There simply isn’t any empirical information that can answer that. As you’ve said, this is much like the “other minds” question.

    Of course it’s an other-minds problem, and that’s why, since the word robot was first used in its current sense, science fiction writers have accepted the imitation game as the only game in town. Loosely speaking.

    We judge the consciousness of other entities by how they behave.

  14. walto: There simply isn’t any empirical information that can answer that. As you’ve said, this is much like the “other minds” question.

    Most proponents of “the singularity” assume that once a computer behaves sufficiently like a human we will grant that it is conscious and extend human rights to it.

    They forget that our recent history shows that we are more than willing to deny actual humans human rights if we decide they are a little different from us.

    A silicon-based appliance has no chance whatsoever, even if it’s named Hal or Data.

    peace

  15. petrushka: science fiction writers have accepted the imitation game as the only game in town. Loosely speaking.

    If it’s the only game in town, most of us would just as soon not play.

    peace

  16. walto,

    No. Comte was a positivist. Like you.

    I’m not a positivist.

    That is, he believed that this info would never be available to science…

    Just like you. He believed the question could never be answered empirically. You say the same thing:

    What I’ve said (and believe) is that they are not and will never be empirical questions that can be answered by scientific investigation.

    His verdict was premature, and so is yours.

  17. petrushka: Of course it’s an other-minds problem, and that’s why, since the word robot was first used in its current sense, science fiction writers have accepted the imitation game as the only game in town

    It might be worth noting that the word “robot” means “forced laborer”, or to be more blunt, “slave.” It’s a Czech word that was first used in its modern sense by the Czech writer Karel Čapek in his play “Rossum’s Universal Robots.” It’s a modern take on the classic master/slave dialectic: the robots rebel and destroy their human masters. This was thirty years before Turing proposed the imitation game as a test of machine intelligence.

  18. keiths:

    Since you agree that people can process information, but deny that brains do so, perhaps you can answer the question that Neil is avoiding:

    And in terms of my [addition] example, you say that the brain isn’t performing the addition, but that you are. What is your brain doing, then? And if your brain isn’t performing the addition, then how is it getting done? Are your two kidneys taking on the job, perhaps with help from your left femur?

    petrushka:

    The brain is a component of a behaving person. I don’t know exactly what the brain is doing when a person computes addition.

    Yet you’ve told us emphatically that the brain does not process information. Addition is information processing. Therefore, according to you, the brain does not perform addition in my scenario.

    Hence my incredulous question:

    And if your brain isn’t performing the addition, then how is it getting done? Are your two kidneys taking on the job, perhaps with help from your left femur?

    petrushka:

    But we do not know how people do arithmetic.

    So? We know that they do arithmetic, and doing arithmetic is a form of information processing. You’ve acknowledged that humans process information, but you’re denying that brains do. Where is the information processing taking place, then?

  19. “Where” is the wrong question. “How” is a better question.

    When I was taking algebra, second year and higher, the processing took place on paper, and the answer was recorded on the paper before I became aware of it. Something like this happens with abacus users. The fingers do the calculation and the person reads the answer.

    As I say, the problem with calling this information processing is that it is not a helpful metaphor. Designing machines that do logical operations faster does not lead in the direction of artificial intelligence. Brains can do logic, but the architecture is not anything like a logic processor.

  20. Kantian Naturalist: It might be worth noting that the word “robot” means “forced laborer”, or to be more blunt, “slave.”

    I think the “forced” part is what is key here. Robots seem to be captive to their programming; they simply cannot choose to do anything else.

    Humans, on the other hand, at least like to think that we have the ability to do otherwise than we do.

    It’s that perceived freedom that sets persons apart from animatrons and robots. With all due respect to Turing, behavior is really beside the point.

    peace

  21. Perhaps a different metaphor. Information processing done by humans is done by a virtual machine running on a CPU that is not, at the lowest level of architecture, logic driven.

    This distinction becomes important if you wish to build or discuss a simulacrum. Or wish to discuss what is meant by consciousness or personal experience.

  22. petrushka: Information processing done by humans is done by a virtual machine running on a CPU that is not, at the lowest level of architecture, logic driven.

    It’s important to note that “not logic driven” is not the same thing as arbitrary or random.

    peace.

  23. petrushka,

    “Where” is the wrong question.

    No, “where” is the right question.

    You told us that humans process information. You told us that brains don’t process information. The assertions are yours, and it’s up to you to support them. (This also applies to Neil, of course.)

    To add up a series of numbers “in one’s head” is to process information. According to you, the addition does not take place in the brain. What justification can you offer for your confident assertion about the location of the information processing? If the brain doesn’t carry out the addition, then where does it take place? The bladder?

    Your repeated evasions indicate that you don’t have a good answer. That’s not surprising, because your claim — that brains don’t process information — is clearly false. Why not just acknowledge that instead of continuing the evasion game?

    You had a false belief about brains, and someone corrected your mistake. That’s a good thing, not something to be hidden.

    When I was taking algebra, second year and higher, the processing took place on paper, and the answer was recorded on the paper before I became aware of it.

    Actually, no. Your brain was intimately involved. An application of chloroform would have prevented any algebra from getting done. The pencil and paper on their own were incapable of performing the processing.

    In any case, I anticipated that objection. Remember, I specified in my scenario that you read the numbers off the list, add them in your head, and only then write down the result.

    The addition is obviously taking place in your brain, not in your liver or your gluteus maximus.

  24. I’m sure your geometric logic has defeated me, aside from the fact that you haven’t bothered to follow anything I’ve said. Nothing that is important.

  25. petrushka,

    Of course I’ve followed what you’ve said. That’s how I became aware of your odd claims regarding people, brains, and information processing.

    Now, perhaps you can follow what I just said (and asked):

    You told us that humans process information. You told us that brains don’t process information. The assertions are yours, and it’s up to you to support them. (This also applies to Neil, of course.)

    To add up a series of numbers “in one’s head” is to process information. According to you, the addition does not take place in the brain. What justification can you offer for your confident assertion about the location of the information processing? If the brain doesn’t carry out the addition, then where does it take place? The bladder?

  26. petrushka: How is brains doing addition relevant to the hard question?

    As Edward Feser is known to have pointed out (via Saul Kripke), people do actual addition, but computers or calculators don’t. Computers only do “quaddition” or quasi-addition, a pre-programmed operation. When the operation hits the limits of the memory or of the program, the computer simply errors out.

    The computer doesn’t even understand the concept of addition, but people understand it. When the human hits the limits of his working memory, he can use symbols for help, and when the operation hits the limits of one particular kind of symbol (say sticks or Roman numerals), he can use another kind (say Arabic numerals), and so on, potentially to infinity – because humans understand the concept of addition, but computers don’t.
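
    As a loose illustration of the “hits the limits” point (a toy example of mine, not Feser’s or Kripke’s), a fixed-width machine adder agrees with true addition only up to the size of its register; here it wraps rather than errors out, but the built-in limit is the point:

        # Toy 8-bit "addition": wraps modulo 256 once the register limit is hit.
        def add_uint8(a, b):
            return (a + b) & 0xFF

        print(add_uint8(100, 100))  # 200 -- agrees with true addition
        print(add_uint8(200, 100))  # 44  -- diverges past the 8-bit limit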

  27. Kantian Naturalist: The conceptual problem is still serious, but the upshot of Kripke and Chalmers for philosophy of cognitive science has been minimal.

    This tells us something about cognitive science – there’s probably nothing much cognitive going on there. Similarly, the impact of cognitive science on linguistics has been minimal. Linguists half a century ago sincerely tried to team up with behaviorists, cognitivists and evolutionists, but it turned out completely unproductive.

  28. In “Physics of the World-Soul: The Relevance of Alfred North Whitehead’s Philosophy of Organism to Contemporary Scientific Cosmology” Matthew Segall writes:

    Almost a century ago, Whitehead warned that if physicists did not begin to reassess the outdated imaginative background of mechanistic materialism in light of their own most recent cosmological discoveries, the scientific enterprise would as a result “degenerate into a medley of ad hoc hypotheses.” Despite the conceptual revolutions of the 19th and 20th century (e.g., evolutionary, relativity, quantum, and complexity theories), scientific materialism remains the de facto natural philosophy of Western civilization.

    Whitehead: It imagines the universe as irreducible brute matter … spread throughout space in a flux of configurations … in itself … senseless, valueless, purposeless … following a fixed routine imposed by external relations.

    Such a picture of ultimate reality leaves no room for life or consciousness. It seems likely that this metaphysical oversight is among the reasons for (post)modern civilization’s ecological and socioeconomic crises. A coherent philosophy of nature has yet to take root among civilization’s intelligentsia.

    … average law-abiding citizens must go about their day under the assumption that they bear some responsibility for their actions, despite the fact that materialistic interpretations of neuroscience leave no room in the brain for anything remotely resembling consciousness, much less free will. Scientific materialism leaves us in the impossible position of having to affirm in theory what we are unable to deny in practice.

    Segall has an interesting discussion on “Consciousness, Technology, and the Singularity”, from a panpsychist position, here.

    IMO a paradigm shift is overdue.

  29. Verbal behavior, which includes adding up numbers in your head, is a recently invented behavior. A billion years of evolution produced brains optimized for other things. The underlying architecture is optimized for parallel processing of situational responses to need and threat.

    Until we began inventing autonomous robots (cars, drones, etc.), AI was pretty much locked into imitating human computers and was focused on solving logical and arithmetic problems, due to cultural history. Most people think of reason as the highest and best kind of brain function. We judge the intelligence of other animals by their ability to count and to solve problems resembling IQ tests.

    But all that is recent invention in the universe of brains. It is a kludge added on to an architecture evolved for immediate response.

    When your objective is to understand qualia, it would be best to understand the mosquito brain. Perhaps now that autonomous behavior is attracting big money, the architecture of computers will evolve to support it.

  30. petrushka: Before we get to the philosophical question, we should observe the phenomenon we wish to understand.

    That seems a sensible suggestion to avoid wasted effort!

  31. petrushka,

    Verbal behavior, which includes adding up numbers in your head, is a recently invented behavior.

    Of course. I picked addition for my scenario not because it is ancient, but because it is something that even you and Neil acknowledge as an example of information processing.

    Brains do process information, and adding up numbers in your head is just one of the many forms of information processing that take place there.

  32. petrushka: Verbal behavior, which includes adding up numbers in your head, is a recently invented behavior.

    And it seems to be lateralized in the human brain. McGilchrist perhaps overstates his case but I find the idea that our awareness of our awareness is limited by this lateralization fascinating.

  33. petrushka:

    Before we get to the philosophical question, we should observe the phenomenon we wish to understand.

    Alan:

    That seems a sensible suggestion to avoid wasted effort!

    As if no one had ever bothered to observe their conscious experiences, and as if such observation weren’t involved in the formulation of the hard problem.

  34. CharlieM, quoting Matthew Segall:

    Such a picture of ultimate reality leaves no room for life or consciousness.

    I would love to see Segall’s demonstration that physicalism precludes life and consciousness.

  35. Suppose for a moment that we wish to construct a Turing Test device to detect and report the color green. The test will include all the kinds of stimuli that cause humans to report the color green.

    Among the stimuli are pure lights emitting in a certain range of wavelengths.

    Additive lights emitting in wavelengths outside the “green” portion of the spectrum.

    Subtractive sources.

    Flickering light sources that add or subtract colors.

    Benham tops that induce color sensations by alternating black and white stripes.

    Afterimages.

    Illusions created by context and motion.

    How do you deal with these qualia in a data processing model? How do you program your imitation game? Can you reliably program, using the data processing model, a device that will “pass” the test when confronted with some previously unnoticed kind of stimulus or illusion?

    Qualia are a feature of a particular kind of brain architecture. The problem with the “data processing” model is not that it is wrong, but that it is misleading.

    If “data processing” is all inclusive and incorporates all possible kinds of reflexive and reactive behavior, then it is like ID. It explains everything and explains nothing.

    When you ask how a material system embodies qualia, you are asking a question about the architecture of the behaving system.

    I suspect that if we develop an architecture that enables really efficient autonomous behavior, it will experience qualia. It will not have to compute green; it will experience green. So to speak.

    And it will be susceptible to illusions.

  36. petrushka: Among the stimuli are pure lights emitting in a certain range of wavelengths.

    There are no “pure lights” in this universe, and no known official standard as to which range of wavelengths is to be considered “green.”

    peace

  37. petrushka,

    Perhaps a different metaphor. Information processing done by humans is done by a virtual machine running on a CPU that is not, at the lowest level of architecture, logic driven.

    Setting aside the awkwardness of the metaphor, let me point out that the lowest level of every actual information processing system is physics, not logic. (Dualists may disagree, but I’m not addressing them here.)

  38. petrushka,

    You keep mentioning Turing tests, but as I’ve already pointed out:

    Such a test would merely establish whether the system reports qualia, not whether it experiences them — unless the ability to report them is invariably accompanied by phenomenal consciousness. But what justifies the latter assumption?

    My earlier challenge still applies:

    How will we know that something that passes a level II Turing Test is actually experiencing the qualia it attributes to itself?

  39. keiths:
    petrushka,
    Setting aside the awkwardness of the metaphor, let me point out that the lowest level of every actual information processing system is physics, not logic. (Dualists may disagree, but I’m not addressing them here.)

    I am not arguing against that. I am arguing against the concept that our current crop of computers are relevant to the question of qualia, or that our current architectures can emulate brains. I am fairly secure in this opinion because I chat with people who work at the DARPA level of IT, and they say the physical architecture problem is unsolved. AI is a dream waiting for someone to figure out how to implement it.

  40. keiths: As if no one had ever bothered to observe their conscious experiences…

    Hmm! If you mean “As if no one had ever bothered to observe their own conscious experiences…” then you miss the point I was making. Half the human brain appears invisible to self-reflection. Not a question of “bothering”, more a question of access.

  41. petrushka,

    The question is whether brains process information, not whether “our current crop of computers” experience qualia.

    Surely by now you can see that brains do in fact process information. It’s time for you and Neil to let go of the silly idea that they don’t.

  42. keiths:
    CharlieM, quoting Matthew Segall:

    I would love to see Segall’s demonstration that physicalism precludes life and consciousness.

    He doesn’t say that life and consciousness are precluded by existence. He is saying that, according to materialism, ultimate reality does not depend on these features; they are just emergent aspects which may come and go in the blink of an eye, metaphorically speaking. In other words, they are relative, not ultimate, features.

    Is your understanding of life and consciousness different from this materialistic view?

  43. Another way to look at the problem as I see it is to think of qualia as analogs, rather than as the product of sequential computing. The fact that neurons “fire” synapses leads us down a garden path. Everyone sees brains as wet digital computers. But the dominant mode is analog.

    Stimuli in, response out. That is the earliest mode, and everything evolved since the earliest tropisms supports immediate or quick action. Pondering and computing mean death.

    If you think of qualia as analogs rather than the product of computation, some of the mystery fades. How does a stimulus evoke the color blue? By shoving the dial. Not philosophically different from water eroding sand and rock.

    The digital aspect of neurons slows down the response, but the “computation” is parallel rather than sequential. There are layers and layers of passing the baton. Animals survive predation not because the system is zippy fast, but because predators have the same constraints.

  44. keiths: Surely by now you can see that brains do in fact process information. It’s time for you and Neil to let go of the silly idea that they don’t.

    You seem intent on scoring points rather than understanding an orthogonal idea, so the conversation is unproductive.

  45. Alan,

    Hmm! If you mean “As if no one had ever bothered to observe their own conscious experiences…” then you miss the point I was making. Half the human brain appears invisible to self-reflection. Not a question of “bothering”, more a question of access.

    That’s confused. We’re talking about the observation of conscious experience, not of activities taking place outside of consciousness.

  46. keiths:

    Surely by now you can see that brains do in fact process information. It’s time for you and Neil to let go of the silly idea that they don’t.

    petrushka:

    You seem intent on scoring points rather than understanding an orthogonal idea, so the conversation is unproductive.

    The problem is that you and Neil are clinging to an idea that has been clearly shown to be incorrect. Set your ego aside and come to grips with what scientists long ago figured out: brains process information.

  47. petrushka,

    If you think of qualia as analogs rather than the product of computation, some of the mystery fades. How does a stimulus evoke the color blue? By shoving the dial.

    That doesn’t help. The hard problem in no way depends on seeing information processing as purely digital.

    The question is how physical information processing, whether analog, digital, or a combination of both, gives rise to first-person phenomenal experience.

  48. keiths: We’re talking about the observation of conscious experience, not of activities taking place outside of consciousness.

    Apart from the obvious question of who is “We”, other participants don’t seem to be communicating with you at all successfully.

    So Keiths is talking about “consciousness” from a first-person point-of-view, notwithstanding his inability to verbalize about the non-verbal half of his brain?

    Are you suggesting that the non-verbal hemisphere is not “conscious”?

  49. keiths:

    I would love to see Segall’s demonstration that physicalism precludes life and consciousness.

    CharlieM:

    He doesn’t say that life and consciousness are precluded by existence.

    Not by existence. By physicalism, which he refers to as “scientific materialism”:

    Despite the conceptual revolutions of the 19th and 20th century (e.g., evolutionary, relativity, quantum, and complexity theories), scientific materialism remains the de facto natural philosophy of Western civilization.

    Whitehead: It imagines the universe as irreducible brute matter … spread throughout space in a flux of configurations … in itself … senseless, valueless, purposeless … following a fixed routine imposed by external relations.

    Such a picture of ultimate reality leaves no room for life or consciousness.

    He’s wrong about that, of course. Hence my statement:

    I would love to see Segall’s demonstration that physicalism precludes life and consciousness.
