At Aeon, philosopher Philip Goff argues for panpsychism:
It’s a short essay that only takes a couple of minutes to read.
Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:
I maintain that there is a powerful simplicity argument in favour of panpsychism…
In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.
…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.
Panpsychism is crazy. But it is also highly likely to be true.
I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.
I was wondering where Dennett “pretends”.
You will keep wondering forever. It takes knowledge of character to know it when you see it.
I mean I’d like to see the claim Dennett is “pretending” supported. Where in your 1 hour video is Dennett pretending? Or is it all pretence?
Alan Fox,
Oh, my bad. I forgot for a while what a hopeless reductionist you are.
The video shows that Dennett pretends that his talk about memes and deepities and such yields a scientific theory of consciousness that leaves nothing unresolved.
If you want to see him just pretending, perhaps like a clown or singing Great Pretender, I have no material support for that.
Alan Fox is being a good literalist.
What’s the evidence that Dennett was pretending?
Well, that he’s spouting nonsense that was supposed to be scientific. Oh, but maybe he believes in it, just as he believes that consciousness is an illusion.
Distinction without a difference, that only the literalist would care about. And don’t forget, Dennett’s a Famous Wise Man who gets to spout nonsense without being called on it by the many non-skeptics (many of whom claim to be skeptics) who tremble and fear before Famous Wise Men.
Glen Davidson
I wasn’t even trying to address how you see consciousness, and I never said or implied that the spectator theory of knowledge was the only way to think about consciousness. I was only pointing out that the hard problem of consciousness arises because of specific theoretical moves that seem ‘natural’ to the philosophers who invented that problem: Thomas Nagel, Saul Kripke, and especially David Chalmers. If you don’t want to talk about what they’re talking about, that’s fine with me.
I only brought it up because it seemed to me that one would have to already be convinced that The Problem of Consciousness is a Very Hard Problem indeed in order to think that panpsychism is a solution to it. I posted the exchange between Dennett and Strawson for the same reason — because Strawson has become convinced that panpsychism is a viable solution to the hard problem of consciousness.
Why is that interesting? Lots of philosophers and scientists make important steps and also missteps. Dennett has been an influence on me, and so have been philosophers he’s had productive disagreements with. At the end of the day I might even disagree with him more than I agree with him, but I’m always better for having taken the time to think through his arguments.
I’m not entirely disagreeing, but how we describe the difficulty does depend on whatever our background theory of cognition is. I mean, if we’re trying to articulate an intuition that some things can’t be explained, then how we articulate that intuition will depend on what it is that we think we can explain!
I don’t think that rejecting functionalism for a much more embodied/embedded or enactive view eliminates the conceptual and epistemic ‘gap’ between first-personal, phenomenological description of lived experience and the third-personal scientific explanation of causal regularities. But it does change how we conceptualize that ‘gap’.
And bring out the glory glory!
KN:
Yet you wrote this yesterday:
You’ve completely contradicted yourself within a space of hours.
If he’s a Spinozist, they’re probably ideas of (perhaps changes in) their own “bodies.”
I think those are good questions.
Only if I thought that the hard problem of consciousness, in Chalmers’ sense, was the best or only way of thinking about the ‘explanatory gap’ (which is not really explanatory, but ok) between science and phenomenology. I don’t think that. In fact I think that the impossibility of giving a functionalist explanation of qualia is a completely mistaken approach to thinking about the relation between cognitive science and phenomenology.
Really? Would you say the same if we were talking about an insulin-producing system instead of a neurocomputational system?
I do tend to take the words that people use to describe their ideas, thoughts and opinions literally.
Now I read that as confirming that your dismissal of Dennett’s views (that consciousness is a flawed concept) as pretense is the same sort of remark as “Oh, atheists aren’t really disbelievers – they just hate God”.
So you don’t agree with Dennett on consciousness. Fair enough.
Not that I’ve watched it all, but I didn’t get that impression from what I did watch. You seem to be confirming that you claim the whole video is a pretense on Dennett’s part.
There seems to be quite a lot more of Dennett pretending. Here’s Dennett in a TED Talk, “The Illusion of Consciousness”.
Btw, when Dennett says that consciousness is an illusion, he means something very specific: that consciousness as understood within the manifest image is an evolved user-illusion. The denial is not that we are conscious (despite the frankly malicious attacks launched against him by philosophers like Nagel and Strawson) but that the manifest image is a reliable guide to what consciousness is.
Alan Fox,
Yes, I have already watched a bunch of those. Thank you very much.
That we can understand ourselves less well than we imagine and that a third-party (experimental) approach can demonstrate the inadequacy of the first-person approach.
I watched that one I linked to, thought I must have seen it before, then realized I’d read it in From Bacteria to Bach and Back.
Dennett also says that free will (another user-illusion) is as real as colors, promises and euros.
Is it not a pretentious or deliberately provocative way of putting it to say that all those things are illusions? First, it should be pretty obvious that you can’t deny that those things exist, and Dennett should be among the first to know that physicalists/naturalists (like Alan Fox) tend to equate illusion with non-existence.
Second, free will cannot be institutionally reformed, established or abolished like euros can, cannot be given and broken like promises can, and cannot be associated with a given physical wavelength like colors can. So where’s the point of analogy? Looks like another pretentious rhetorical move.
KN:
Yes. Alan’s position is bizarre, and certainly not something that Dennett would support.
I think so, yeah. I think it’d be weird to separate the organism from that system.
Same literalistic nonsense.
Well, why don’t you just go on believing that. It would be too much to expect you to admit that you’re being literalistic where people are writing loosely and metaphorically. Normally, in other words.
Glen Davidson
KN,
The Hard Problem is the problem of bridging the explanatory gap. So when you wrote this…
…you were in fact contradicting your earlier statement:
In any case, your earlier statement is incorrect, as I explained earlier. The explanatory gap persists whether or not functionalism is true, so the easy/hard distinction does not rely on assuming functionalism.
Also, you spoke of “rejecting functionalism for a much more embodied/embedded or enactive view”. That indicates some additional confusion on your part about functionalism, which is not incompatible with “embodied/embedded or enactive views.”
Certain things in the world have subjective mentality. That is for sure. Does consciousness just pop into the universe suddenly, and for no reason, when brains reach a certain level of complexity? Such a notion seems quite absurd.
If subjective conscious entities are indeed natural elements within the universe, we can try to find where they exist, but only if they contribute some unique causal agency.
I speculated in the following paper where we may locate the natural conscious entities. https://philpapers.org/rec/SLESA
I simply disagree. I take the Hard Problem of Consciousness to be what Chalmers said it was when he coined the term, which he couches precisely in terms of the impossibility of explaining qualia in terms of cognitive functions. If you want to use the term “the Hard Problem of Consciousness” to mean “the explanatory gap,” then OK.
The Hard Problem of Consciousness is not the puzzle of “how do we get from cognitive science to phenomenology?” (Ray Jackendoff calls this “the mind-mind problem”: the relation between the computational mind and the phenomenological mind.) It’s a specific position: that it is impossible to explain phenomenal consciousness in computational, cognitive-scientific terms. Conversely, if we had a complete and comprehensive cognitive neuroscience, there would not be any explanation of phenomenal consciousness. This claim — that a cognitive-scientific explanation of phenomenal consciousness is impossible — hinges on Chalmers’s argument for how we can infer possibility from conceivability. And that in turn depends on some technical issues in philosophy of language.
It depends on the details. It’s probably fair to say that predictive processing is an embodied/embedded functionalism. But enactivism is widely construed as committed to anti-representationalism, and functionalism is a theory of mental states as representations. (Though it may be possible to resolve this disagreement at a theoretical level.)
Welcome to TSZ!
Indeed. One of my objections to “consciousness” is that it often suggests a false dichotomy of having it or not. “Awareness” is a better way of thinking about cognitive abilities. There’s a continuum and it is less misleading to talk of levels of awareness.
Mentioning awareness reminds me that what I might have been attributing to Dennett (I’ve just been rereading Beautiful Minds) was picked up from Michael Graziano in Consciousness and the Social Brain.
I agree that we should talk about levels or degrees of awareness, but I’ll confess that I don’t share your intuition that the word “awareness” conveys gradations whereas the word “consciousness” does not.
KN, to Alan:
Nor I. It makes perfect sense to talk about someone “drifting in and out of consciousness”, for instance.
Alan Fox,
I found this line interesting from the description: “One function of this circuitry is to attribute awareness to others: to compute that person Y is aware of thing X. In Graziano’s theory, the machinery that attributes awareness to others also attributes it to oneself.”
I think that’s really interesting and probably right, but with one crucial revision: I’d say that the machinery that attributes subjectivity to others also attributes it to oneself. We apply the intentional stance not just to others but to ourselves.
A slightly better way of putting it would be that we learn how to navigate social environments by mastering the vocabulary of agency and propositional attitudes (e.g. beliefs, desires, wishes, thoughts), but in doing so we apply this vocabulary to ourselves as well as to others. And that’s just what it is to be a competent intentional agent — the kind of being that understands itself and others in terms of the vocabulary of agency and propositional attitudes.
But in light of that, it’s gotta be, as Dennett pretty much says, a category mistake to understand what’s happening in the brain in terms of what’s happening at the level of social dynamics. That’s a conflation of the personal level and the subpersonal level. When we do cognitive and affective neuroscience, we’re not going to find any patterns of neuronal activity that map neatly onto psychological states, because the function of folk-psychological vocabulary is for navigating social spaces, not for disclosing what’s really going on under the hood.
I guess that puts me somewhere between Dennett and Churchland on a lot of these issues.
KN:
keiths:
KN:
Come on, KN.
From the Wikipedia article on the explanatory gap:
From the Internet Encyclopedia of Philosophy:
Scholarpedia:
From the Stanford Encyclopedia of Philosophy:
From Chalmers himself, in the original paper:
KN,
No, it isn’t. Functionalism is the idea that the functional role of a mental state is what matters, not the way that function is implemented or the substrate in which it is implemented.
That’s the third misconception about functionalism that you’ve expressed in this thread. Why are you finding the concept so difficult to grasp?
KN,
If you want to work in philosophy of mind — and you’ve told us that you do — then you can’t just wing it. You’ve got to buckle down and learn the basic concepts, including things like compatibilism, the Hard Problem, and functionalism.
There’s no shortcut.
lorenzosleakes:
Hi, Lorenzo. Welcome to TSZ.
In your paper, you attribute consciousness to elementary particles, such as electrons. What do you think electrons are conscious of, and how could we ever test that idea?
I understand all this stuff far better than you do or ever will, but I’m sick and tired of how every conversation with you devolves into a dick-measuring competition.
Kantian Naturalist:
How dare I accuse The Great KN of misunderstanding compatibilism, functionalism and the Hard Problem!
Get over yourself, KN. You’ve misunderstood some basic concepts in the philosophy of mind, and you’ll need to rectify that if you intend to work in the field.
Try to respond constructively instead of shooting the messenger.
Without attempting to judge whether or not functionalism is true, the implementation of the concept reminds me of FSCIO. It’s one of those map/territory things.
What matters is not what we think about substrates, but whether, in fact, we can implement mental states (or more specifically, the behaviors that lead us to say there is a mental state) in non-biological substrates. Show me the beef.
Produce an example, and the philosophical debate becomes superfluous. Actually, it is superfluous anyway.
petrushka:
If you’re skeptical of that, then you’re not a functionalist.
It wouldn’t, actually. To borrow your earlier example, consider an android on a par with Star Trek’s “Data”. If such an android were ever constructed, people would be saying “Impressive, but is he really conscious? Are his “mental states” really mental states? Or is he just an elaborate simulation of consciousness and thinking?”
My “misunderstandings”, as you call them, consist entirely of the fact that I don’t always use words in ways that are consistent with whatever education you’ve given yourself by reading some encyclopedia articles on the Internet.
KN,
Your misunderstandings consist of not understanding the concepts.
Learn them. They are important.
That’s exactly what I’m addressing. That kind of question belongs to an obsolete era of philosophy and theology. The questions are unanswerable and therefore unproductive.
Now, something like Star Trek Data will not pop fully formed from the head of Zeus or IBM. It will evolve, and we will get the same kind of useless questions that are asked of cats and dogs and apes.
Quite frankly, I cannot be certain that the posters at this site are not bots. I judge them not to be, but that is based entirely on behavior.
petrushka,
That’s a bit premature. Philosophers and scientists don’t give up so easily.
You can learn a lot by thinking carefully about a question, even if you don’t end up answering it.
I like that switch from “mental state” to “behaviors that lead us to say there is a mental state”.
The expression “mental state” should be removed from the vocabulary used by philosophers. Talk of mental states muddles the issues.
I would hazard the claim that there’s no question who’s the bigger dick, but there’s a (slim) chance a moderator might disapprove.
It depends on the character of the talk — confusing talk about mental states muddles the issues, clear talk about mental states does not. I don’t think that one is implicitly committed to dualism simply by using the term “mental state,” and forbidding use of the term isn’t going to lead to any progress in philosophy or psychology.
…says walto, who often blows a gasket, like KN just did, when his mistakes are pointed out to him.
Neil:
KN:
Right, and forbidding the use of the term “mental state” would impede — not improve — discussion. It would be a return to behaviorism, essentially.
Who’d you think I meant?
Let’s see.
Consciousness has a perfectly good medical usage. Deeply unconscious, barely conscious etc. But this is not what I am suggesting as a phenomenon shared across living organisms. Consciousness doesn’t really have a generally accepted meaning when talking about other species. Are dogs conscious? Cats? Bacteria? I think I can argue that there is a very primitive level of awareness when a flagellate bacterium employs run-and-tumble strategy to maintain itself in optimal nutrient concentration.
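For what it’s worth, the run-and-tumble strategy is simple enough to sketch in a few lines of Python. This is a toy one-dimensional model of my own (the gradient function and parameters are invented for illustration, not taken from any real chemotaxis data): the bug keeps moving in its current direction while the nutrient signal improves, and “tumbles” to a random new direction whenever it worsens.

```python
import random

def concentration(x):
    # Hypothetical 1-D nutrient gradient, peaking at x = 0
    return -abs(x)

def run_and_tumble(steps=2000, start=50.0, seed=0):
    """Toy run-and-tumble: run while the signal improves,
    tumble (pick a random heading) when it worsens."""
    rng = random.Random(seed)
    x = start
    direction = rng.choice([-1.0, 1.0])
    last = concentration(x)
    for _ in range(steps):
        x += direction
        now = concentration(x)
        if now < last:  # things got worse: tumble
            direction = rng.choice([-1.0, 1.0])
        last = now
    return x

# The bug starts far from the peak but ends up hovering near it
print(run_and_tumble())
```

The striking thing, and I take it the point of the example, is that nothing in the loop represents the gradient or a goal; a memoryless comparison of “better or worse than a moment ago” is enough to produce behavior we’re tempted to describe as seeking.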