At Aeon, philosopher Philip Goff argues for panpsychism:
It’s a short essay that only takes a couple of minutes to read.
Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:
I maintain that there is a powerful simplicity argument in favour of panpsychism…
In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.
…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.
Panpsychism is crazy. But it is also highly likely to be true.
I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.
The most obvious problem is that Goff’s argument involves a category error. Ascribing rudimentary consciousness to electrons and other particles just means that any macroscopic object (a rock, a typewriter, a brain, the Moon) is a collection of conscious particles. It says nothing about why living brains are conscious in a unified way, or why that consciousness can be disrupted by general anesthesia or a blow to the head.
Well, at least he’s not asserting in this article that the universe is a conscious mind. That claim was supposed to explain “fine-tuning.”
But as expressed in the article, at least, it’s still rather useless. It ignores the fact that much of the brain is apparently unconscious at any given time, and, of course, that consciousness can be partial, reduced, or intensified by certain states and drugs. It may be true that there’s a kind of internal register in the brain that is collectively turned into consciousness, but if this internal register does exist, it’s clearly far from what we actually call “consciousness” in ourselves.
“Panpsychism” is another word that’s hardly more than an invocation that’s supposed to explain without, you know, actually explaining. It’s not necessarily wholly wrong, but it’s hardly going to explain why consciousness goes on and off, and waxes and wanes, within our own experience.
Glen Davidson
Evidently, Goff is a creationist.
Well, okay, he probably isn’t. But his style of thinking seems to have been borrowed from creationism.
Goff completely fails to show why panpsychism would explain consciousness. Maybe electrons are conscious, or maybe they aren’t. Until there is a demonstrated connection between electron consciousness and human consciousness, there’s no argument there.
Okay, I see that keiths is calling that a category error by Goff. I’m more inclined to say that it is just mushy thinking.
Glen:
Wow. That article (also by Goff) is a doozy. A few money quotes:
And:
And:
Neil,
It’s a category error, because the question of whether particles are individually conscious is orthogonal to the question of whether conglomerations of particles possess a unified consciousness, and why.
If electrons possess a rudimentary consciousness, then the electrons in an anesthetized brain are conscious, just like those in a brain that is fully awake. Their consciousness — or lack thereof — is orthogonal to whether the brain as a whole — anesthetized or awake — is conscious.
Well, I can certainly see the appeal of the idea that metazoans didn’t bring forth de novo some amazing phenomenon never before witnessed in billions of years on the planet, but just tapped into and harnessed a commodity that was present all along. That being said, I agree with you that it is going to take quite some effort to explain how elementary-particle awareness is concentrated in nodes of neurons to produce unified individual consciousness.
I wonder: What, in Goff’s view, are electrons actually conscious of?
The essay gives no hints.
Agree 100%:
Classic ‘God of the Gaps’ mentality. While he’s arguing in a different direction from Creationists altogether, his moves have been taken from a familiar playbook.
“In fact, the only thing ~~we know~~ materialists believe about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience.”
Corrected his statement to account for his unstated ideological assumption.
1. Which materialists believe that the only thing we know about the intrinsic nature of matter is that some of it involves experience?
2. You seem to be suggesting that Goff is a materialist? What’s your basis for that?
3. What the hell are you talking about?
Here’s another article on panpsychism, this time at Quartz:
It isn’t by Goff, but unfortunately, it is largely about Goff’s views.
Relevant to this discussion, Dennett vs Strawson in New York Review of Books here.
As I see it, Goff makes quite a few fundamental blunders.
1. The most serious is that he takes not only seriously but also literally this idea of “interiority.” Sure, it seems to us that there’s some intrinsic quality to experience. But is there? Would a scientific explanation of consciousness vindicate the intuition — or undermine it? (My money is on the latter.) But without some really compelling reason to trust our “intuitions” about subjectivity, Goff isn’t entitled to draw any of the rest of his conclusions — especially when his whole point is that our intuitions are often not to be trusted! [Relevant: A revolution in our sense of self, though arguably Chater has repackaged insights from Hume and, before him, Buddhist ‘no-self’ philosophy of mind.]
2. Does physics really deal with relations and not with intrinsic properties? Maybe. I don’t do philosophy of physics and I’m in no position to say. But it seems entirely plausible to me that, if physics has no room for intrinsic properties, then maybe the conclusion we should draw is that there aren’t any intrinsic properties? Maybe the very idea of “intrinsic natures” is itself a philosopher’s illusion?
WJM:
walto:
William has gone off half-cocked again. Goff is not a materialist.
I fail to see why consciousness requires explaining.
petrushka:
I fail to see why you fail to see why consciousness requires explaining.
What do you mean by explanation?
If we produced a being like the Star Trek Data, would a description of its operation suffice as an explanation?
Pan-fartism is crazy, but it’s probably true. Otherwise there would be a discontinuity between things that fart and things that don’t fart. The most parsimonious explanation is that quantum particles, electrons, etc, fart.
ETA: That explains why Goff seems to be pulling those arguments out of his ass. It’s just that his brain is farting.
petrushka,
That’s what Chalmers calls the “Easy Problem”. Are you familiar with the Easy Problem/Hard Problem distinction?
Even Dennett, a skeptic of the Hard Problem if there ever was one, sees a need to explain why he thinks the Hard Problem is illusory. He recognizes how powerful the intuitions behind it are.
That’s not really a fail.
Entropy:
Goff would presumably reply that farts are empirically detectable, and that experiments have failed to detect electron farts. (Not to mention that farts are inherently macroscopic phenomena.) Thus panfartism clashes with observation.
Panpsychism, on the other hand, does not clash with observation. The problem for Goff is that panpsychism makes no distinguishing predictions, so of course it doesn’t clash with observation.
That leaves him nothing to fall back on except for Occam’s Razor, and he misapplies it.
keiths,
Pan-fartism doesn’t clash with observation either. Didn’t you notice that Goff’s brain farted that crap?
We cannot detect electron farts because they’re tiny, like the energy they use for being conscious. So infinitesimal that it escapes detection.
Pan-everything is crazy but it’s probably true.
Regarding easy vs. hard, I fail to see why “I don’t know, but we’re working on it” is not the best available response.
I fail to see what philosophy could add to a description of what configuration of “matter” is necessary and sufficient to produce the behaviors we associate with consciousness.
Animals with brains observe things. Humans observe themselves observing. It’s an additional layer, but not a new or different kind of behavior that requires an additional kind of explanation.
Entropy:
But Goff doesn’t claim that they use energy for being conscious. Remember, he’s not a materialist.
petrushka:
That’s quite different from your earlier statement:
Now you’re acknowledging that an explanation is required, and that we’re working on it.
petrushka:
Again, that’s the Easy Problem, and that’s why it got the name!
The Hard Problem asks: Why does subjective experience accompany the information processing that takes place in our brains?
Consciousness still uses a lot of energy regardless of his beliefs. So, whatever he thinks about this is inconsequential.
There is no hard problem, just very poor philosophy. The experience is subjective because consciousness is an activity performed by subjects. It’s fucking obvious. It’s like asking why, when a bat flies, I am not flying too.
True. But there’s still the question whether this is a real problem, one that needs to be taken seriously and answered.
Dennett, for his part, has argued that the very idea of “qualia” is something that we don’t need to entertain seriously. (This is, incidentally, one of the few major differences between him and Sellars. But Sellars’s view on qualia is either incredibly subtle or incoherent.)
Dennett has a nice argument against qualia in a couple of places, including here. Pat Churchland has a different but complementary argument here from 1996.
I would add that the easy/hard problem distinction relies on assuming functionalism about cognition, which is deeply problematic in some ways. (The idea seems to be that we just know that functionalism cannot possibly explain qualia, because zombies.)
Hi keiths,
It’s not often that we find ourselves arguing on the same side, but I completely agree with you that the Hard Problem of Consciousness is a genuine one.
Kantian Naturalist mentioned Daniel Dennett’s and Pat Churchland’s arguments against qualia. Another well-known eliminativist is Alex Rosenberg, author of The Atheist’s Guide to Reality. Thomist philosopher Ed Feser has responded to him at length, in a series of posts.
I would say more, but I need to get some shut-eye, as I have to rise and shine in four hours. Cheers.
Vincent,
I’d be interested in hearing what else you have to say on the subject, when you have time.
KN,
I think we end up with a hard problem either way. Either the Hard Problem is real, or we have the hard problem of understanding exactly why the Hard Problem isn’t Hard!
I’m conflicted on this issue. I reject Block’s “access consciousness” vs. “phenomenal consciousness” distinction, because I think that anything that qualifies as phenomenal consciousness must be a form of access consciousness, or at the very least isn’t separable from it.
I’ve considered qualia as an epiphenomenon of certain kinds of information processing, but that runs into problems of its own.
I’m quite willing to consider that qualia aren’t what they seem, and that (to borrow Dennett’s metaphor) they are like a magic trick played on us by our brains.
But if it is a trick, I don’t understand how it works.
I should probably do an OP on this, if for no other reason than to organize my thoughts and to clarify the problems for myself.
KN,
I’m not so sure about that. Suppose that functionalism were false and that there were something special about the wetware of the brain that couldn’t be duplicated in another medium. You’d still have the question “why does the wetware give rise to subjective experience, instead of proceeding without it?”
In other words, the Hard Problem arises not from functionalism, but from the difficulty of explaining the first-person character of subjective experience in terms of the third-person facts of some system’s physical operation.
(And to posit a magic soul/spirit/whatever that just happens to be conscious, as the substance dualists do, is “soul of the gaps” thinking. It also clashes with the evidence and raises the interaction problem.)
Entropy:
You’re talking about electron consciousness, remember?
Goff’s argument is poor, but it’s not because it’s tantamount to “panfartism”.
Entropy,
Um, no.
Um, yes.
Well, that and the tendency to mystify consciousness.
I’m inclined to disagree. I find the idea that “consciousness” is another* human construct quite persuasive.
*long list might be appended! 🙂
Easy or hard, consciousness “requires” an explanation in the same way any other phenomenon requires an explanation, and the only satisfactory kind of explanation will be of that same kind as other scientific explanations.
Let’s back up a bit.
Do “lower” animals experience anything (or do they appear to)? Do they experience anything about themselves? Do they itch and scratch where they itch? Do they request scratching from other individuals?
Is there a continuum of awareness? Are animals aware of things that may happen in various situations; are they aware of consequences? Do they exhibit conflict in choices? Do they appear to have emotional responses to conflict situations?
Are they aware of their own bodies and of their hands and limbs? Do they track their own movements when reaching?
Now, is there a difference in kind between awareness of situations, sensations, movement toward objects, and awareness of one’s own awareness?
Seriously. What physics is postulated that can account for rudimentary awareness, but not awareness of self and of one’s own mental states?
And persuasion is a self-made construct by yourself for yourself. So drop it. Don’t go by those constructs.
I have the same question. What physics is it?
When science says nothing about that thing, then what sort of blunder is it? A scientific blunder? Perhaps it only seems a “fundamental blunder” to you, when the real blunder is on you.
And when science says nothing about it, why are you putting money on it? What sort of philosophy is this?
Well you could ask the same thing about the extrinsic phenomena. Christ, we have much better reason to believe in the intrinsic quality than the extrinsic “facts” that are known via intrinsic qualities–and quantities.
Not if you don’t recognize that it exists in the first place. Of course you’d never get to a scientific theory of consciousness, but you could pretend, like Dennett does with both consciousness and memes.
Well I don’t even know what you mean by “intuitions” about subjectivity. I’m hardly worried about the reality of consciousness, however, and find objectivity much less believable than subjectivity.
And we have what reason to think that our abstractions have anything to do with intrinsic properties? Is quantification and quality the same thing in your philosophy?
You ought to have some idea of what an abstraction is. 660 (or so) nm is not red, it’s just a wavelength that we perceive as red.
If you don’t understand abstraction.
Whatever, I don’t care about the term per se. What’s clear is that consciousness precedes abstractions like physics.
Glen Davidson
Indeed! And, I would suggest, the burden is on those who claim consciousness is a thing to come up with a definition of consciousness.
OK
Yes!
Yes, definitely! Awareness is a much clearer concept.
Yes.
Yes.
One is limited to one’s own level of awareness. How can any entity comprehend a level of awareness beyond that of itself?
Life is only physics.
Where does Dennett do this?
Erik:
In Goff’s case, it’s new physics. Electron consciousness is not part of the Standard Model, after all!
The problem is that he offers nothing to justify the new physics other than a misapplication of Occam’s Razor.
I think that knowing something about how cognitive functions are implemented by neurocomputational structures will help us understand why “qualia” are a myth.
Think of it this way: a neurocomputational system is guiding an organism through the world by using the incoming sensory flux to alter behavior so as to maintain homeostasis and allostasis. There’s the neurocomputational system functioning as a proxy model of the world, and then the neurocomputational system has a sub-system that functions as a proxy model of itself. Suppose then the sub-system queries the system — “what state are you in?”. But perturbing the system in that way also perturbs the system as a whole, including the subsystem that’s doing the probing of the system by sending a query to it. All of that is going to redirect energy and time away from maintaining allostasis and homeostasis, on a limited energy budget. (This is why it’s important to take neurobiology seriously in ways that cognitive science of the 1960s and 1970s rarely did.)
If we’re looking for “qualia” or “facts of consciousness” then it’s going to look, from a cognitive neuroscience perspective, like we’re asking, “can the modeling subsystem determine if the states of the system are the same or different when it’s being actively modeled vs when it isn’t?”
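The system/sub-system picture above can be caricatured in a few lines of code. This is only an illustrative toy, not anything from the thread (the class names, states, and energy numbers are invented for the sketch): the point it shows is that the self-modeling subsystem can never report the unperturbed state of the system, because the act of querying changes that state and draws on the shared energy budget.

```python
# Toy sketch of a system with a self-modeling subsystem.
# All names and numbers are illustrative assumptions, not anyone's actual model.

class System:
    """The organism-guiding system, maintaining homeostasis on a limited budget."""
    def __init__(self):
        self.energy = 100.0
        self.state = "maintaining homeostasis"

    def absorb_query(self):
        # Being probed redirects energy and perturbs the system's state.
        self.energy -= 5.0
        self.state = "responding to self-query"

class SelfModel:
    """Sub-system that functions as a proxy model of the system itself."""
    def __init__(self, system):
        self.system = system

    def query(self):
        # The act of asking "what state are you in?" perturbs the very
        # thing being modeled, so the answer is never the unperturbed state.
        self.system.absorb_query()
        return self.system.state, self.system.energy

sys_ = System()
model = SelfModel(sys_)
state_before = sys_.state
observed_state, remaining_energy = model.query()
print(state_before, "->", observed_state)
```

The observed state always differs from the state that held before the query, which is the toy analogue of the question whether the modeling subsystem can tell if the system is the same when actively modeled as when it isn’t.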
Put otherwise, the very idea of “qualia” depends on this deeper intuition that we can apply the spectator theory of knowledge to ourselves: to know what something is is to just look at it, without any interference or manipulation, and then we can apply that to our own mind.
If you think that the spectator theory of knowledge is right, and you think you can apply it to your own mind so that you can just see what the contents of consciousness are, and you just know that the contents of your mind just are what you see them to be, then you’ll find something irresistible about the Hard Problem.
KN,
Perhaps, but again, that’s independent of whether functionalism is true.
Also, it’s interesting that you regard functionalism as “deeply problematic.” Dennett is a functionalist, yet you seem to think he’s on the right track.
Again:
That difficulty is there whether or not functionalism is true.
Kantian Naturalist,
Why are you unable to think of consciousness and/or qualia in any way other than your spectator view of it? It’s certainly not how I see it, which, by the way, you already misconstrued badly in the past.
But you just persist in your “spectator” false dilemma, probably because Famous Wise Men have fed you your biases.
Glen Davidson
https://youtu.be/D_9w8JougLQ
Know your memes.
Seriously, who is guiding who? Why would an organism need guiding and why would a neurocomputational system take up the task? Is a neurocomputational system something other than organism? Is an organism minus neurocomputational system still an organism? And a neurocomputational system minus organism still a neurocomputational system?
Something is not adding up here.