At Aeon, philosopher Philip Goff argues for panpsychism:
It’s a short essay that only takes a couple of minutes to read.
Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:
I maintain that there is a powerful simplicity argument in favour of panpsychism…
In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.
…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.
Panpsychism is crazy. But it is also highly likely to be true.
I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.
Alan:
It isn’t a single concept. Hence the title “Concepts of Consciousness”.
That is my point, indeed!
Neil,
This is true only if you construe “facts” as propositions or something similar. In that case, of course, someone has to create them.
But “fact” can also refer to a state of affairs — for example, that stars exist — in which case it is independent of us.
I see parallels in the usage of “life”. On its own, not a very useful word. I’ll have a look when I get more time. Thanks for the link.
Alan:
Lots of terms have more than one definition, but that’s a silly reason to throw up your hands. Capable readers know how to use contextual clues to narrow down the possible meanings, and good authors will supply those clues or make it explicit.
Chalmers, for instance, identifies exactly the type of consciousness he associates with the Hard Problem:
If you want to know what people mean by a term, it makes sense to read what they write about it.
KN:
Neil:
Good grief, Neil. KN’s question was explicitly about consciousness, in a thread about consciousness:
KN:
Neil:
KN:
How would you answer the actual question?
If this is typical of Chalmers, then it illustrates the incoherence of trying to analyse human thought processes by thinking about them. Doomed to failure. As I keep saying no thinking entity can comprehend another entity as complex as itself.
I love how keiths gets to ask all the questions and give none of the answers! 🙂
There are behaviourists and behaviourists. Skinner studied pigeons seemingly without any insight into, for instance, Kahneman’s “fast and slow thinking”, or into how the divided brain appears to show a visual handedness: one eye and hemisphere attending to the routine, the other on the lookout for danger. Skinner might have done better looking at corvids.
Not according to Frege and a lot of other philosophers.
Wow–480 pages!
But I like that there are so many citations of Dretske!
Well, he’s a particular kind of functionalist then. He’s not talking about organizing a bunch of beer bottles to do the same thing a brain does.
You’ve perhaps noticed the difference between deep sleep and being awake. Or between being anaesthetized and being awake.
It’s a noticeable difference.
Glen Davidson
If he gave you the answers, you would never learn for yourself.
Actually, it was the effect of self-reference.
Yes. I already pointed out that the medical definition (e.g. the Glasgow Coma Scale) is fine.
Hi GlenDavidson,
Yep. That’s intransitive creature consciousness, which is but one of the many varieties of consciousness distinguished by philosophers. From pages 77 to 79 of my thesis:
“What about neuroscientists?” you may be wondering. What varieties of consciousness do they distinguish? From pages 90-92 of my thesis:
So there you go. Defining consciousness is not as easy as it looks. I should mention, by the way, that my thesis was written in 2007, so as you’d expect, there have been further scientific developments since then.
Unlike you, I am not a dictionary literalist.
States of affairs are human artifacts.
Good grief yourself. You seem to take consciousness to be a free-standing concept, completely independent of all other concepts.
Neil:
Um, no.
keiths:
Neil:
Um, no. That there are stars would be true even if there were no humans to point it out.
keiths:
Neil:
Reading about a topic makes someone a “dictionary literalist”?
Um, no.
Neil,
You said that there is a “conceptual problem” with consciousness. KN asked you to identify it. How do you answer KN’s question?
newton:
“Self-reference” was the criterion. Consciousness was the purported effect.
Do you believe that self-driving cars are conscious? They model themselves and their situations, after all.
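To make the point concrete, here is a toy sketch (purely illustrative; every name is made up, and this resembles no real self-driving-car architecture) of what “modeling itself” amounts to computationally:

```python
# Toy sketch (all names hypothetical): "self-reference" in the computational
# sense is cheap. A system can maintain and update a model of its own state
# without anyone supposing that this makes it conscious.

class Car:
    def __init__(self):
        self.position = 0.0
        # The car's model of itself: its own position and battery level.
        self.self_model = {"position": 0.0, "battery": 100}

    def drive(self, distance):
        self.position += distance
        # Self-reference: the system updates its representation of itself.
        self.self_model["position"] = self.position
        self.self_model["battery"] -= 1  # crude toy accounting

car = Car()
car.drive(5.0)
print(car.self_model)
```

If self-reference were a sufficient criterion, a dozen lines of Python would qualify as conscious, which is presumably a reductio.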
walto,
Yes. The functionalist kind.
Alan,
That’s a goofy argument.
Chalmers isn’t trying to comprehend an entire person. He’s simply asking how information processing gives rise to subjective experience — phenomenal consciousness.
It’s a dumb question for the reasons stated. Qualia are an imaginary concept and consciousness is not (unqualified) a useful concept.
Alan,
No, and I just explained to you why your argument is silly.
Meanwhile, it’s amusing that you’re inadvertently accusing yourself of incoherence:
So by your own standard, your discussion of awareness is “incoherent” and “doomed to failure”.
That this counts as a state of affairs depends on how humans delineate what they consider to be states of affairs.
I wrote: “Because it isn’t a problem due to a shortage of evidence. It’s a conceptual problem.”
The words “with consciousness” were not there in what I wrote. That they were not there was intentional.
The evidence is right in front of you, Neil:
KN:
Neil:
The attempt to understand human consciousness by thinking about it rather than working collectively by experiment and observation, yes.
keiths:
Neil:
A state of affairs is just the way things are at a particular time. Stars existed long before humans arose. That state of affairs depended in no way on the availability of humans to “delineate” it.
Alan,
As if Chalmers were arguing against observation and experimentation.
Come on, Alan.
There is no “way things are”. There is only the way that we say things are.
Humans didn’t bring stars into existence, Neil. They existed long before us.
That is not actually relevant.
Nice cutting job. I wonder if you might pick something up in film editing–maybe for toddler movies.
keiths:
Neil:
Sure it is. It shows that your statement is wrong:
walto,
As if your comment about beer bottles were relevant to the question of whether petrushka is a functionalist.
Actually, no. Rather, it shows a problem in how you conceive our relation to the world.
walto,
🙂
I read the description of the hard problem and still don’t see what the problem is. I admit that we don’t have an explanation, but I don’t see that as any special kind of problem.
In his book ‘Saving the Appearances’, Barfield coins various terms relating to how we interact with the world. The following are rough descriptions of his terms: ‘participation’ is how we deal with the world from within; ‘collective representations’ relate to our understanding of the phenomena we perceive; the ‘unrepresented’ are the things-in-themselves; ‘alpha thinking’ is ordinary everyday thinking; and ‘beta thinking’ is philosophical thinking. The machine-like view of the universe which came to fruition in the nineteenth century he called ‘onlooker consciousness’.
From a video based on the book, he says:
From a website I have just come across:
What is here called ‘consensus reality’, Barfield had termed ‘collective representations’.
The present-day ‘onlooker consciousness’, which imagines the ‘I’ as a subject separate from an outer world of objects, is a temporary stage that will be overcome by ‘final participation’.
Of course it’s relevant, as any reader of Searle would have understood. In any case, behaviorism is generally considered an antecedent to functionalism. https://plato.stanford.edu/entries/functionalism/#Beh
That’s an interesting article, and I will need some time to read and understand it, but I do not regard brains as Turing machines. Physics posits absolute determinism, but within the realm of the possible one cannot determine what a brain will do by analyzing its state. Even the early behaviorists spoke of probabilities rather than of predictable outcomes.
There is a rather crude concept of AI that involves the imitation game. Build a machine that can play chess, or a machine that can pass for a human interlocutor.
That is not what I would call functionally equivalent, although such machines can be very useful, and within their scope, vastly better than humans.
This seems to be the central problem addressed by early behaviorists, and their “solution” was to declare it off limits to study. I think that was reasonable at the time, because there was no technology with which to probe such phenomena.
The solution is still over the horizon, but I think we have a direction to travel. We know enough about brains to emulate bits and pieces. Biology has a several-hundred-million-year head start in building brains, and I do not expect to see anything that I would call AI. Building one is equivalent to solving the problem of first life.
But I’m comfortable in believing that a necessary component of “intelligence” is the ability to evolve behavior, and that behavior evolves both at the biological population level and at the neural connection level. Machines that do not and cannot do this are not candidates for being called AI, and will not evoke questions about whether they “experience” anything. So for the moment, questions about artificial qualia are moot.
Just to note that the behaviorist solution was to say that all mental states are off-limits to empirical explanation, and not just “qualia”. Mental states are usually taken to be representations, or to be about things. Qualia are usually understood as non-representational states of feeling, of sheer awareness.
Chalmers et al. depend on this idea that we can conceive of a distinction between the representational and non-representational character of mental states, so that we can conceive of beings that have all of our representational states but none of our qualia. That’s crucial to their intuition that qualia are metaphysically weird and require an explanation that goes beyond what any cognitive science could provide.
The breakthrough with functionalism was to show that we can talk about mental states as representational states of the system, from a third-person or objective standpoint, on analogy with computational states of computing machines.
I recently read two fascinating papers that bear on this: “The cognitive neuroscience revolution” and “From symbols to icons: the return of resemblance in the cognitive neuroscience revolution”. (Sadly, both papers are behind paywalls, but I can send PDFs to those interested.)
The gist is that the early functionalists were interested in computer science as a conceptual framework for cognitive science, which meant that they thought about mental representations as symbols without any regard for implementation. When neuroimaging (first CAT, then MRI and fMRI) improved to the point where neuroscience could be integrated with cognitive science, there was a slow shift from thinking about mental representations as symbols to neural representations as icons. One important consequence of this shift has been in the status of neural representations. Whereas for the functionalists mental representations were mere posits, it has been argued that neural representations have been directly observed (see Neural Representations Observed).
See how? It’s behind a paywall. I want to debunk it, but not by giving them money…
ETA: Oh, here’s the pre-print https://www.academia.edu/36032960/Neural_Representations_Observed Let’s take a look.
I do not accept the concept of neural representations. What brains do is behave.
There is no representation of external objects or events in brains. That is not a philosophical position, it is a fact. (Facts can be wrong, I admit, and I can be wrong.)
But claims about seeing representational neural activity bring to mind claims about lie detectors. There is always someone claiming that they finally have broken the code, and neural activity X corresponds to some objective mental state.
petrushka:
Suppose I were to abduct you and take you by black helicopter to an undisclosed location run by the CIA, where you were asked to draw the rough floorplan of your house. I suspect you’d be able to do it; most people would.
Your house is an external object. If it is not represented in your brain, how are you able to draw the floorplan from the inside of a CIA lab, where your house is not in view?
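The thin sense of “representation” at issue can be sketched in a few lines of code (a toy illustration only; the names are hypothetical, and no claim is made about how brains actually implement this). A representation here is just stored internal state that preserves information about an external object and remains usable when the object is out of view:

```python
# Toy sketch (hypothetical names): an agent records information about an
# external object (a house) while observing it, then reproduces that
# information later with no sensory access to the object.

class Agent:
    def __init__(self):
        self.floorplan = {}  # internal state: room name -> (x, y) grid position

    def observe(self, room, position):
        # Perception: record information while the house IS in view.
        self.floorplan[room] = position

    def draw_from_memory(self):
        # Recall: reproduce the layout while the house is NOT in view.
        return dict(sorted(self.floorplan.items()))

agent = Agent()
agent.observe("kitchen", (0, 0))
agent.observe("bedroom", (0, 1))
agent.observe("bath", (1, 1))

# The agent is now "in the CIA lab"; the layout comes entirely from stored state.
print(agent.draw_from_memory())
```

If nothing in the agent carried information about the house, the recall step would be impossible; that stored state is all that “representation” need mean here.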