A dubious argument for panpsychism

At Aeon, philosopher Philip Goff argues for panpsychism:

Panpsychism is crazy, but it’s also most probably true

It’s a short essay that only takes a couple of minutes to read.

Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:

I maintain that there is a powerful simplicity argument in favour of panpsychism…

In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.

…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.

Panpsychism is crazy. But it is also highly likely to be true.

I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.

656 thoughts on “A dubious argument for panpsychism”

  1. keiths:
    petrushka,
    In other words, you’re a functionalist. I lean that way myself, but it doesn’t solve the hard problem.

    I have no use for ists and isms. I find your hard problem to be equivalent to free will vs determinism. Give me an operational definition with clearly stated entailments, or don’t waste my time.

  2. keiths:
    keiths:

    walto:

    No, you haven’t. Epistemology is philosophy, whether you like it or not, and claims about what we will never know, like yours and Comte’s, are epistemological claims.

    It doesn’t matter that epistemology is philosophy: nobody has denied that and it’s entirely irrelevant.

    However, I see that I “lied” in my previous post. It was three times, not twice. So perhaps it should be guanoed.

  3. walto,

    It doesn’t matter that epistemology is philosophy: nobody has denied that and it’s entirely irrelevant.

    It’s obviously relevant, because Comte’s claim is epistemological, and so is yours.

  4. petrushka,

    Give me an operational definition with clearly stated entailments, or don’t waste my time.

    In other words, “I don’t like the hard problem. Make it go away! Define it out of existence!”

  5. Come on, walto.

    Comte’s claim is about what we cannot ever know. So is yours. Those are epistemological claims.

  6. Kantian Naturalist: Neil, you seem to be assuming that the idea of cognition as information processing relies on the idea that the information is already out there, lying around, waiting to be picked up by brains and processed.

    That’s what AI people seem committed to. That’s what Dennett seems committed to.

    I suppose I’m more inclined to see brains as mechanisms for converting Shannon information into semantic information.

    That’s a reasonable description of what is happening when I am listening to speech. But it does not describe what is happening when I am bird-watching. If I am bird-watching, then there isn’t any Shannon information involved, other than what my own brain may be producing.

  7. keiths,

    That’s absurd. And yes, I know that’s Dennett’s simplistic definition.

    Shannon’s theory was explicitly a theory of communication. And communication requires a sender of that information.

  8. KN:

    I suppose I’m more inclined to see brains as mechanisms for converting Shannon information into semantic information.

    Neil:

    That’s a reasonable description of what is happening when I am listening to speech.

    Which contradicts your claim that information doesn’t flow into our brains via our senses.

    But it does not describe what is happening when I am bird-watching. If I am bird-watching, then there isn’t any Shannon information involved, other than what my own brain may be producing.

    There’s plenty of Shannon information involved. The hallmark of Shannon information is that it reduces uncertainty in the recipient.

    While wearing a light-tight mask and good earplugs, Nellie is unable to determine what birds (if any) are in her back yard. When she removes the mask and earplugs, she can tell you exactly what birds are in her back yard, and how many of each. Her uncertainty is reduced, because the sensory input carries Shannon information.
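
    To make the uncertainty-reduction point concrete, here’s a minimal sketch (the four equally likely species are made up for illustration): before the mask comes off, Nellie’s uncertainty is the entropy of her prior over which bird is at the feeder; after she looks, that entropy collapses, and the drop is the Shannon information the observation carried.

    ```python
    from math import log2

    def entropy(probs):
        """Shannon entropy, in bits, of a discrete distribution."""
        return -sum(p * log2(p) for p in probs if p > 0)

    # Hypothetical prior: masked and earplugged, Nellie considers four
    # bird species equally likely to be at the feeder.
    prior = [0.25, 0.25, 0.25, 0.25]

    # After removing the mask, she sees that it is the first species.
    posterior = [1.0, 0.0, 0.0, 0.0]

    print(f"before: {entropy(prior):.2f} bits")      # 2.00
    print(f"after:  {entropy(posterior):.2f} bits")  # 0.00
    print(f"information gained: {entropy(prior) - entropy(posterior):.2f} bits")  # 2.00
    ```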

  9. Neil:

    Shannon’s theory was explicitly a theory of communication. And communication requires a sender of that information.

    That’s as silly as arguing that entropy doesn’t apply to anything but gases, since gas physics is where the concept was originally applied.

  10. Neil,

    Your answer to this?

    Your brain is doing a lot of work when you visualize my [bowling ball/wooden ramp] scenario, yet you claim that it isn’t processing information. Well, if it isn’t processing information, then what is it doing, according to you?

  11. keiths:
    Come on, walto.

    Comte’s claim is about what we cannot ever know. So is yours. Those are epistemological claims.

    Again, that’s just wrong. If somebody says we’ll never know what it looks like to see Neptune up close, that’s a technological prediction, not an epistemological thesis. No one who can’t tell that difference should try to do philosophy.

  12. walto,

    First, the notion that science and technology can’t impinge on philosophical questions is ridiculous.

    Second, Comte’s statement wasn’t merely a technological prediction. He declared that the composition of stars was unknowable:

    While we can conceive of the possibility of determining their shapes, their sizes, and their motions, we shall never be able by any means to study their chemical composition or their mineralogical structure …

    [emphasis added]

    What is knowable, what isn’t, and why? Those questions are all within the purview of epistemology.

  13. keiths: While we can conceive of the possibility of determining their shapes, their sizes, and their motions, we shall never be able by any means to study their chemical composition or their mineralogical structure …

    Just a mistaken empirical claim. Nothing more. And obviously so, too. How many times do you need to be told this before it sinks in?

  14. A claim that something is unknowable by any means. That’s obviously an epistemological claim, walto.

  15. No. What other means than empirical ones might there be to uncover empirical knowledge? A noetic ray? Again, this is obvious, if you’d just think about it for a minute instead of repeating yourself.

  16. keiths: In other words, “I don’t like the hard problem. Make it go away! Define it out of existence!”

    A serious advocate of the seriousness of the Hard Problem would probably accuse Dennett of doing the same thing, e.g. in his “The Unimagined Preposterousness of Zombies”.

    Note: if your hard problem of consciousness persists for four hours or more, please consult a metaphysician.

  17. keiths:
    A claim that something is unknowable by any means. That’s obviously an epistemological claim, walto.

    I don’t think that’s true. It’s a claim about what sorts of things we can know, not a claim about what knowledge is.

    If I were to say, “we’ll never know what dinosaurs really looked like, because we’ll never be able to travel backwards in time,” I’m not making an epistemological claim, because I’m not making a claim about what knowledge is, or how it is possible. I’m making a claim about what sorts of facts we’ll never be able to know, because of the laws of physics (as we currently understand them).

    In other words, it’s (roughly) a difference between first-order claims (claims about the world) and second-order claims (claims about those claims).

  18. Kantian Naturalist: If I were to say, “we’ll never know what dinosaurs really looked like, because we’ll never be able to travel backwards in time,” I’m not making an epistemological claim, because I’m not making a claim about what knowledge is, or how it is possible. I’m making a claim about what sorts of facts we’ll never be able to know, because of the laws of physics (as we currently understand them).

    Right.

    Even something like “There’s no way I’ll ever know Russell personally” isn’t an epistemological claim (even if it has some metaphysical implications about “the after-life”).

  19. keiths: In other words, “I don’t like the hard problem. Make it go away! Define it out of existence!”

    That’s not what I said.

    What I said was, define it in a way that it can be answered. Simply being able to type a question does not make it a useful or coherent question.

  20. keiths:

    A claim that something is unknowable by any means. That’s obviously an epistemological claim, walto.

    KN:

    I don’t think that’s true. It’s a claim about what sorts of things we can know, not a claim about what knowledge is.

    If I were to say, “we’ll never know what dinosaurs really looked like, because we’ll never be able to travel backwards in time,” I’m not making an epistemological claim, because I’m not making a claim about what knowledge is, or how it is possible.

    The question of what is knowable, and what isn’t, is very much a part of epistemology.

    The opening paragraphs of the Internet Encyclopedia of Philosophy article on epistemology:

    Epistemology is the study of knowledge. Epistemologists concern themselves with a number of tasks, which we might sort into two categories.

    First, we must determine the nature of knowledge; that is, what does it mean to say that someone knows, or fails to know, something? This is a matter of understanding what knowledge is, and how to distinguish between cases in which someone knows something and cases in which someone does not know something. While there is some general agreement about some aspects of this issue, we shall see that this question is much more difficult than one might imagine.

    Second, we must determine the extent of human knowledge; that is, how much do we, or can we, know? How can we use our reason, our senses, the testimony of others, and other resources to acquire knowledge? Are there limits to what we can know? For instance, are some things unknowable?

    [emphasis added]

  21. petrushka,

    What I said was, define it in a way that it can be answered.

    The problem is that you’re defining it out of existence. You’re proposing an operational definition that says “If the system convincingly reports having first-person phenomenal consciousness, then it has first-person phenomenal consciousness.”

    But that’s the very question we’re trying to answer: Can a system report phenomenal consciousness without actually having phenomenal consciousness? And if not, then why, exactly? Why must the kind of information processing capable of producing such first-person reports invariably be accompanied by actual phenomenal consciousness?

  22. keiths: But that’s the very question we’re trying to answer: Can a system report phenomenal consciousness without actually having phenomenal consciousness?

    You are calling for the answer to a question that cannot possibly be answered.

    If you disagree, tell me how it could be answered. Show your hypothetical methodology.

  23. keiths:

    But that’s the very question we’re trying to answer: Can a system report phenomenal consciousness without actually having phenomenal consciousness? And if not, then why, exactly? Why must the kind of information processing capable of producing such first-person reports invariably be accompanied by actual phenomenal consciousness?

    petrushka:

    You are calling for the answer to a question that cannot possibly be answered.

    How do you know that it “cannot possibly be answered”? You — like walto — are repeating Comte’s error.

    If you disagree, tell me how it could be answered.

    Comte’s experience is instructive. He confidently announced that the composition of stars would never be known, by any means. Within a couple of decades, science had advanced to the point where the question could be answered.

    Consciousness is a hot area of theorizing and research. What basis do you have for saying that no breakthrough will ever occur by which we can answer the question?

  24. The hard problem cannot be answered as you state it. I suspect it will be satisfactorily solved when it can be stated operationally.

  25. Vincent,

    From your UD post on zombies:

    When we consider the causal powers of the three kinds of zombies, we can immediately see that a human being would possess certain powers that each of them lacked. A qualia zombie would lack the power to describe what it was feeling, because it wouldn’t have any feelings.

    That’s technically correct, but very misleading. Such a zombie would, by definition, behave indistinguishably from a non-zombie. That means it would still report sadness on going through a divorce, fear upon walking down a dark street in a dangerous neighborhood, elation upon receiving a promotion. It’s just that those reports would be false. The feelings being reported would be simulated, not real.

  26. keiths: The feelings being reported would be simulated, not real.

    That is possible in the imitation game, which originally was envisioned as taking place between teletype terminals. The operational question was: can we distinguish between the verbal utterances of a human and those of a sophisticated computer program?

    But there’s something fishy going on with the argument when the game changes from looking at the output, to looking at the producing system. Does anyone not see the difference between looking at the outward behavior of a simulation, and looking at the molecule for molecule identity of the original and copy?

  27. petrushka,

    But there’s something fishy going on with the argument when the game changes from looking at the output, to looking at the producing system. Does anyone not see the difference between looking at the outward behavior of a simulation, and looking at the molecule for molecule identity of the original and copy?

    The “fishiness” is deliberate. The thought experiment highlights the conflict between two competing intuitions:

    1) The intuition that a physically identical copy would be conscious in the same way as the original; and

    2) The intuition that information processing can always take place “in the dark”, without an accompanying phenomenal consciousness.

    You and I affirm (1) over (2), but that leaves us with the hard problem: Why are some kinds of information processing accompanied by subjective experience while others are not?

  28. keiths: From your UD post on zombies:

    When we consider the causal powers of the three kinds of zombies, we can immediately see that a human being would possess certain powers that each of them lacked. A qualia zombie would lack the power to describe what it was feeling, because it wouldn’t have any feelings.

    That’s like saying that an actor would lack the power to describe the feelings that the actor is portraying because the actor doesn’t have those feelings. But that’s the point of acting.

    Likewise, a computer can produce the words that feeling beings do as well in order to simulate the sentient being. That’s why merely acting as if you have the emotions or qualia means little to nothing–even humans often fake it.

    Glen Davidson

  29. keiths: Why are some kinds of information processing accompanied by subjective experience while others are not?

    I think the information metaphor is bogus. Brains don’t process information. They behave. Which I grant is a nebulous term, but it’s the best I have at the moment.

    Most of the terms about brain behavior are inadequate. Stimulus-response is inadequate except for tropisms and understood chemical reactions.

    Computer analogies are just as inadequate. Comparing brains to computers is cargo cult science. Making airplanes out of thatch. Mimicking the easily observable outward appearance.

    This is why I say jumping from the failure of computers to mimic brains to discussing the possible failure of atom for atom replicas to “be” human is fishy. These thought experiments are not of the same universe. They are orthogonal, or perhaps they have no intersection at all.

  30. petrushka: I think the information metaphor is bogus. Brains don’t process information. They behave. Which I grant is a nebulous term, but it’s the best I have at the moment.

    That seems like a weird distinction to me.

    I take it all of the following is widely accepted: like all organs, brains have specific functions as a result of past natural selection. What brains were selected for is the coordination of perception and action, and they do so by taking up structured energies in the environment via the activation of sensory transducers and using those structured energies to alter the model of the environment that’s implemented through a pattern of activation in and across neuronal assemblies, resulting in downstream impulses being sent to the muscles.

    If someone doesn’t want to call that “information processing,” I guess I’m not going to fight them but it seems like a weird semantic quibble to have. Why not call that information processing?

  31. Kantian Naturalist: Why not call that information processing?

    Because “information” in the realm of computers and communication implies discrete units that have “meaning”. That is, each unit corresponds to something, and that something is fungible. It can be translated to other media; it can be stored and retrieved; it is independent of the substrate.

    The best illustration I can think of is the experiment in which a processor evolved the ability to discriminate tones or frequencies. The resulting “program” was specific to the physical characteristics of the embedding chip. Not the logical structure, but accidental features that affected timing in unexpected ways, that might not transfer to another chip, and certainly not to a different design that was logically identical.

    Brains are more like that than they are like calculators.

  32. Kantian Naturalist: What brains were selected for is the coordination of perception and action, and they do so by taking up structured energies in the environment via the activation of sensory transducers and using those structured energies to alter the model of the environment that’s implemented through a pattern of activation in and across neuronal assemblies, resulting in downstream impulses being sent to the muscles.

    That’s not information processing.

    The term “information processing” mostly comes from what computers do.

    An accountant balancing the books — yes, that’s information processing.
    Converting a temperature from Fahrenheit to Celsius — yes, that’s information processing.

    What a thermometer does — no, that is not information processing. The output of the thermometer is information (the temperature reading). The input to the thermometer is not information. The thermometer is creating (manufacturing, crafting) information that informs us about the world. The thermometer is in the business of crafting information. It is not in the business of processing information.

    Perceptual systems, likewise, are in the business of crafting information to inform us about the world. They are not processing information.

    The distinction between crafting (or creating) information and processing information is important. The question of whether crafting information results in sensation is very different from the question of whether processing information results in sensation.

  33. I’d like to hear an argument as to why the phrase “information processing”, which was originally used to think about what computers do, should only be used to describe what computers do. If neuroscientists find this concept to be useful, who are we to police their vocabulary?

    Please note: I’m not saying that it’s helpful to think about brains as Turing machines. Not that anyone ever thought they were — the dominant idea of minds as Turing machines in the cognitive science of the 1970s, as developed within philosophy by Jerry Fodor and Hilary Putnam, was appealing to many precisely because it wasn’t reducible to neurophysiology. By contrast, the eliminativists like the Churchlands were inspired by connectionist networks, not by classical Turing machines.

  34. Kantian Naturalist: If neuroscientists find this concept to be useful, who are we to police their vocabulary?

    I’m not trying to police their vocabulary.

    The problem, as I see it, is that “information processing” is being understood as something that happens internally. What I see as important is the interaction with the external world.

  35. Neil Rickert: The term “information processing” mostly comes from what computers do.

    An accountant balancing the books — yes, that’s information processing.

    Converting a temperature from Fahrenheit to Celsius — yes, that’s information processing.

    That’s data processing rather than information processing.

    Don’t you think there is a difference between data and information?

  36. A non-philosophical problem here is that brains behave the way they do because they are not purely digital or logical. Neurons and their connections are timing dependent, and what they do is incredibly noisy, using the information metaphor. I have thought of neural interactions as analog rather than digital, although synapses fire or not, which superficially looks digital.

    Computer designers do everything in their power to eliminate noise and guarantee that each and every logic circuit and operation follows the intended design. There are exceptions to this kind of design, but the exceptions are not information processing.

    A circuit, whether biological or electronic, whose primary operation is dependent on accidental features of structure, is behaving rather than processing information. We can understand the physics of its operation, but we are unlikely to be able to build predictable units of behavior. Such systems will have to learn.

  37. I don’t think a whole lot about the vocabulary of computation, but a quick Google search yields a reasonable distinction between data and information. Data would be the bits being manipulated, and information would be the context that gives the bits meaning.

    Brains can certainly “process” data by doing math or logic or storage or retrieval. But that is a recent kind of behavior, since the invention of writing and counting. Brains lose the John Henry competition with machines.

    I would agree with Neil that people create meaning by interacting with the environment. Rocks and trees and rain and suns do not have aboutness in the absence of people or observing animals.
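
    A toy illustration of that data/information split (the bytes here are arbitrary, chosen just for the example): the same raw data yields different information depending on the interpretive context brought to it.

    ```python
    import struct

    raw = b"Hi"  # the data: two bytes, nothing more

    # The "information" depends on the context of interpretation:
    as_text = raw.decode("ascii")               # the greeting "Hi"
    as_uint16 = struct.unpack(">H", raw)[0]     # the big-endian integer 18537
    as_bits = "".join(f"{b:08b}" for b in raw)  # the bare bit pattern

    print(as_text, as_uint16, as_bits)  # Hi 18537 0100100001101001
    ```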

  38. Neil Rickert: The problem, as I see it, is that “information processing” is being understood as something that happens internally. What I see as important, is the interaction with the external world.

    Maybe the problem here is simply that you don’t want to use the phrase “information processing” because you conceive of that as involving a wholly internal process. But of course interaction with a noisy and chaotic environment is essential to cognition — we all agree here about that. At the same time, presumably neurons are doing something in regulating organism-environment interactions! So the question becomes, why not think of what neurons are doing as processing information in order to regulate organism-environment interactions?

    The worries being canvassed here about information processing rely on the assumption that this idea is only properly used when applied in its original context of computing machinery and telecommunications. I worry that it’s a weird fetishism about the past and a weird originalism about meaning to think that the meaning of a phrase is limited to the contexts in which it was first introduced.

    To be sure, there’s something deeply attractive about the embodied/embedded approaches to cognition that get their theoretical inspiration from Heidegger, Merleau-Ponty, Dewey, and Gibson. (That’s how I got into philosophy of cognitive science myself — via phenomenology and pragmatism.) At the same time, that whole research program tends to put so much of cognition into the world and into the organism-environment interaction that it’s disconnected from any story about what brains are actually doing.

    So even if one wants to say (which appears to me to be mere sanity and nothing too provocative) that cognition involves causal loops between brains, bodies, and environment, and cannot be understood in neuroscience alone, there’s still got to be room for theorizing the neuronal contribution to cognition or the neuronal component of cognition.

  39. The problem is not just terminology. It is the notion that cognition and qualia can be usefully abstracted.

    My personal opinion (worth nothing at all) is that there is no grammar to whatever is going on in the brain. We can map till we are blue, but we will not be able to replicate people, or even mammals.

    Yes, the visual cortex seems pretty regular, and the general sweep of things is mappable, but this knowledge is very grainy. As difficult as abiogenesis, I think.

  40. petrushka: My personal opinion (worth nothing at all) is that there is no grammar to whatever is going on in the brain. We can map till we are blue, but we will not be able to replicate people, or even mammals.

    To be fair, no one here has a worthwhile opinion about any of this stuff (certainly not me either), because no one at TSZ has sufficient expertise in philosophy, cognitive science, and neuroscience. The people who do have the requisite expertise aren’t at TSZ because they are too busy actually doing stuff.

  41. petrushka:

    I think the information metaphor is bogus. Brains don’t process information.

    It’s interesting that both you and Neil have glommed onto that silly notion. I’ve already addressed it:

    petrushka:

    Does anyone think the information “processed” by brains can be quantified or translated into another medium?

    keiths:

    Someone hands you a list of numbers and asks you to add them up and write down the answer. You do.

    There was information in the list. It entered your brain via your visual system. It was processed by your brain, producing the sum of the numbers. Your brain translated that sum into a series of motor commands, causing you to write down the answer underneath the list.

    How is that not information processing, and how is that not a translation of information from medium to medium?
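
    A minimal sketch of that pipeline, with purely illustrative names and numbers: information arrives encoded in one medium, is processed, and the result is re-encoded in another.

    ```python
    def read_list(printed_list: str) -> list[int]:
        """'Perception': decode the incoming medium (a typed list) into numbers."""
        return [int(token) for token in printed_list.split()]

    def process(numbers: list[int]) -> int:
        """The processing step: compute the sum."""
        return sum(numbers)

    def write_answer(total: int) -> str:
        """'Motor output': re-encode the result in a different medium."""
        return f"The sum is {total}."

    print(write_answer(process(read_list("17 4 23 9"))))  # The sum is 53.
    ```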

  42. Brains can certainly “process” data by doing math or logic or storage or retrieval. But that is a recent kind of behavior, since the invention of writing and counting.

    “Storage and retrieval” — aka “memory” — have been around a lot longer than writing and math, petrushka.

  43. petrushka,

    This is why I say jumping from the failure of computers to mimic brains to discussing the possible failure of atom for atom replicas to “be” human is fishy.

    That isn’t the argument at all. Not sure where you got the idea that it was.

  44. Neil,

    An accountant balancing the books — yes, that’s information processing.

    Yet you claim that brains don’t process information. Do accountants use their gall bladders for this purpose?

  45. It seems to me that what some people want to say here is that the optic lobe of a diving gannet isn’t processing information as the bird adjusts orientation and velocity.

  46. And to this:

    Neil:

    But it does not describe what is happening when I am bird-watching. If I am bird-watching, then there isn’t any Shannon information involved, other than what my own brain may be producing.

    keiths:

    There’s plenty of Shannon information involved. The hallmark of Shannon information is that it reduces uncertainty in the recipient.

    While wearing a light-tight mask and good earplugs, Nellie is unable to determine what birds (if any) are in her back yard. When she removes the mask and earplugs, she can tell you exactly what birds are in her back yard, and how many of each. Her uncertainty is reduced because the sensory input carries Shannon information.

    The photons impinging on her retinas and the sound waves impinging on her eardrums carry information about the outside world. They reduce her uncertainty about which birds, if any, are in her back yard.
