At Aeon, philosopher Philip Goff argues for panpsychism:
It’s a short essay that only takes a couple of minutes to read.
Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:
I maintain that there is a powerful simplicity argument in favour of panpsychism…
In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.
…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.
Panpsychism is crazy. But it is also highly likely to be true.
I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.
KN,
Yes, we can — and the fact that we can conceive of p-zombies is what leads to the hard problem. The question is whether p-zombies are actually possible in our world, and if they aren’t, to explain why.
I’m unaware of anything in our “best available cognitive science and mindfulness practices” that settles the question. Did you have something particular in mind?
walto,
You’re basically repeating Comte’s error. Comte assumed that we would never be able to determine the composition of stars. You are assuming that we’ll never be able to understand the nature of consciousness well enough to solve the hard problem.
Comte’s position was a philosophical one. So is yours. Epistemology is philosophy.
Turing Test, level II, at least. What, for example, do we mean by red? Do we include sensations of red produced by Fechner-Benham wheels?
keiths:
petrushka:
How will we know that something that passes a level II Turing Test is actually experiencing the qualia it attributes to itself?
Your test basically assumes that p-zombies are impossible. How do you know that?
walto,
To what extent — in your view — is the distinction between empirical questions and legitimately philosophical questions itself alterable over time? Do you subscribe to a version of the analytic/synthetic distinction? (I do myself. Just wondering if you do.)
I don’t understand the question. How do I know that you do?
I don’t.
We make assumptions based on observed behavior. I make assumptions about my cat if he limps and cries. Fortunately for the cat, he doesn’t communicate via teletype.
I’m not assuming it, I’m asserting it. That is my belief. I don’t think it is an empirical question. Incidentally, Wittgenstein made the same mistake as Comte regarding the far side of the moon. As indicated, it’s important to be able to tell the difference between categorial and empirical questions.
To be honest, I’ve never understood why philosophical zombies are supposed to be conceivable. One sees this claim made all over the place, and I just don’t see it. Maybe I’m just not trying hard enough, or my imagination is defective, but I’m not able to conceive of philosophical zombies.
I do understand that there’s a kind of dis-articulation or lack of integration between lived experience as described in phenomenology (also in art and literature) and causal explanations in cognitive neuroscience and related disciplines. I hesitate to call this ‘the explanatory gap’ as this suggests a gap between two different kinds of explanation, whereas I don’t think that phenomenology is any kind of explanation.
The question, though, is this: does my cognitive grip on the discrepancy or incongruity between cognitive science and phenomenology show that I should find it conceivable that something could have all of my cognitive and conative functional structures yet with no felt experiences?
That’s where I find myself not able to get on the bus with Chalmers, and it’s also why I really wanted to tease apart the explanatory gap from the hard problem of consciousness. I simply cannot conceive of philosophical zombies. I don’t know what my problem is.
Then there’s also the question of whether conceivability entails possibility. Descartes and Chalmers both think it does, but their arguments are complicated and it’s not clear to me if they are sound. Descartes’s argument depends on his proof that God exists, since it is only if God exists that He could bring it about that anything I can conceive of actually exists, and hence anything I can conceive of is possible. Chalmers’s argument depends on some highly technical issues in formal semantics, which are interesting to be sure, but I think that Chalmers’s entire way of doing semantics is fundamentally misguided (in brief: it’s a version of the myth of the given insofar as it assumes that we can have a secure cognitive grip on intensions independent of social practices).
Here’s a related problem: how do you know when you’ve succeeded in conceiving of something? Can I conceive of a square circle? In one sense, maybe: I can say that I’m conceiving of something that has the properties of being a square and of being a circle. I can’t construct the geometric figure in my mental imagery, but that doesn’t matter, since conceiving is not imagining. For all I can tell, philosophical zombies are like square circles: it’s just not possible for us to separate cognitive function from awareness, and anyone who tells you otherwise has a bridge in Brooklyn they want to sell you.
That settles the question? No, not at all. But here’s a way of thinking about it: how much cognitive functioning should we expect to find without any accompanying awareness? Consider blindsight: people report no visual experiences, but they can guess at the location of objects at better than chance. It seems as if their visual systems are processing information at some lower levels without that information ever getting passed along to the higher levels of neuronal processing correlated with consciousness. But there are also all sorts of visual tasks that blindsight patients are bad at, and you wouldn’t hire one to drive a bus. I think that this kind of case makes it actually more difficult to conceive of someone who can perform all of our cognitive and conative activities without any awareness at all.
I don’t myself think philosophical questions are analytic. And my sense is that Putnam was right to say that there can be both analytic and synthetic statements without it being the case that every statement must be one or the other.
Your question about alteration is hard, because if there has been some sort of change, one can always say that it’s no longer the same question. My impression is that that is a heavyweight philosophical question itself!
Though to Wittgenstein’s credit, he also pointed out that the distinction between factual propositions and “hinge propositions” (such as, for him, that no one has ever been to the Moon) is itself historically contingent.
That suggests to me that expressing a Wittgensteinian “hinge proposition” is not a good test for whether something is a philosophical statement.
Agree. Same with the atom for atom clone.
We already have “atom for atom” clones. They are called identical twins. Due to mutation and developmental divergence, they are not atom for atom, but they are as close as we can get.
As for “understanding” consciousness, if we build something that asserts convincingly its consciousness, we will accept it, eventually. We will not understand it, because it will have evolved, and we will never understand complex evolved systems in the same way and to the same degree that we understand designed systems.
That seems right. The inability to grasp that distinction has a lot to do with why intelligent design is so popular with engineers but not with biologists.
Trivial — at least for a mathematician.
In the Banach spaces $L^1$ and $L^\infty$ the unit circle is a square.
It doesn’t.
Interesting! I think that actually supports my earlier contention that conceivability is relative to background knowledge.
Kantian Naturalist,
I should perhaps say a bit more.
We define “square” and “circle” in terms of distance. It is the specific distance metrics in those Banach spaces that make the unit circle a square in each case.
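The point about distance conventions is easy to check concretely in two dimensions. Here is a minimal Python sketch (my own illustration, not from the thread) showing that under the taxicab ($\ell^1$) and max ($\ell^\infty$) metrics, the set of points at distance 1 from the origin is a square:

```python
from math import sqrt

# Two distance conventions on R^2, besides the familiar Euclidean one.

def l1(x, y):
    """Taxicab (l^1) norm."""
    return abs(x) + abs(y)

def linf(x, y):
    """Max (l^infinity) norm."""
    return max(abs(x), abs(y))

# Under l^infinity, every point on the axis-aligned square with corners
# at (+-1, +-1) is at distance exactly 1 from the origin:
for t in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    assert linf(1.0, t) == 1.0   # right edge of the square
    assert linf(t, 1.0) == 1.0   # top edge of the square

# Under l^1, the "unit circle" is the same square rotated 45 degrees,
# with corners at (+-1, 0) and (0, +-1):
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert l1(t, 1.0 - t) == 1.0

# A point on the round Euclidean unit circle lies strictly inside one
# square circle and strictly outside the other:
p = 1 / sqrt(2)
assert linf(p, p) < 1.0 and l1(p, p) > 1.0
```

Same points, same definitions of “distance 1 from the origin” — whether the resulting figure is round or square depends entirely on which measuring convention is in force.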
This is related to why I have a different view about human cognition.
How we determine distance is entirely a matter of convention. We even call it a measuring convention.
Perhaps because I am a mathematician, I can see that there is an enormously important role for conventions. And that’s why I am a conventionalist.
We could not have any facts at all, without first having conventions as a basis for coming up with those facts — measuring conventions are a well known example of this.
Does the earth go around the sun? Or does the sun go around the earth? It entirely depends on whether we are using the Copernican conventions or the Ptolemaic conventions.
Getting back to consciousness — people want to know how that comes from information processing. A computer is doing information processing. We, for the most part, are not. Of course, an accountant is doing information processing, but an artist isn’t.
What we are mainly doing is information manufacturing — or, perhaps more accurately, information crafting. Without conventions, there could not be information. The computers use information formed in accordance with our conventions. We, by contrast, invent conventions and we map the world into information by following the conventions that we have invented.
Dennett, in his “Bacteria to Bach” talks a lot about semantic information, which he sees all around us. He is badly mistaken. There isn’t information all around us (except for the advertising). Dennett needs to read his own book. There he will find a section about “user illusion”. That he sees information all around is part of his user illusion. The information that he is seeing is being manufactured by his cognitive system (chiefly by his perceptual system).
Conscious experience is not the experience of information processing; it is the experience of crafting information.
Oh, to connect with other threads here, Dembski’s “conservation of information” is complete bullshit.
keiths:
Neil:
I’m aware of your bizarre belief that brains don’t process information.
Consider my wooden ramp example from earlier in the thread:
If brains don’t process information, what exactly are they doing when we visualize a scenario like that?
Neil:
It isn’t being manufactured out of whole cloth. It’s constrained by sensory input, and those constraints are themselves a form of information. We learn about the outside world through sensory inputs. Information is transferred.
And why are our perceptual systems “manufacturing” information, if not to pass it along to our brains where it can be processed?
I think the problem revolves around the term information. It’s not entirely unlike the problem of information in genomes.
Does anyone think the information “processed” by brains can be quantified or translated into another medium?
keiths:
petrushka:
Then you do understand the question!
You don’t know whether I am experiencing qualia, because you don’t have first-person access to the contents of my consciousness.
If you’re like me, you believe it for other reasons.
And on similarity of construction. Humans’ nervous systems are extremely similar to each other, and humans behave similarly. Thus it seems likely that our consciousness is also similar.
An argument from similarity is far from being a solution to the hard problem, however. It tells us nothing about why the information processing that our brains perform is accompanied by first-person phenomenal experience.
petrushka:
Someone hands you a list of numbers and asks you to add them up and write down the answer. You do.
There was information in the list. It entered your brain via your visual system. It was processed by your brain, producing the sum of the numbers. Your brain translated that sum into a series of motor commands, causing you to write down the answer underneath the list.
How is that not information processing, and how is that not a translation of information from medium to medium?
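The list-of-numbers example can be caricatured in a few lines of code — a toy sketch (my own, not from the thread) in which the same quantity passes from one medium (a written list) through processing (summation) into another medium (a written answer):

```python
# Toy model of the example above: information travels from a written
# list, through arithmetic processing, back out as a written answer.

written_list = "12\n7\n23"          # medium 1: marks on the page

numbers = [int(line) for line in written_list.splitlines()]  # "perception"
total = sum(numbers)                 # processing
written_answer = str(total)          # medium 2: marks on the page again

assert written_answer == "42"        # the sum survives the translations
```

Whatever one’s metaphysics of information, the quantity expressed by the marks is preserved across each change of medium.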
Why is there air?
Why indeed.
I didn’t know science dealt in why questions, which is why I don’t think much of the hard problem. I don’t trouble myself with questions that can’t be answered, or even expressed in an answerable form.
This is also why one of the smartest people who ever lived came up with an operational definition of intelligence.
“Level two” is my feeble attempt at an operational definition of qualia, with all credit due to science fiction writers.
Keiths followed this with a three-paragraph story in which the word “information” doesn’t even appear. And then he says:
That sure seems like a non sequitur.
petrushka:
It does, routinely:
Why does fertilizer make plants grow better?
Why do sufficiently massive stars become supernovae?
Why can’t I unbake a cake?
And yes, this one too:
Why are certain kinds of information processing accompanied by first-person phenomenal experience?
Neil:
That sure seems like a non-answer.
Care to try again?
Agreed. I think I was clear enough that we manufacture or craft information about the world. And that already indicates that it is not made out of whole cloth.
The primary purpose of information is to inform.
keiths:
walto:
Comte asserted his belief, too. You’re repeating his error.
And again: Comte’s position was a philosophical one. So is yours. Epistemology is philosophy.
Those are generally regarded as how questions. Why questions are about agency. Just a quibble about terminology. You can have your terms if we both understand them.
But if you are asking how are qualia produced, I think that is answerable, and though it is a hard question, it will be answered via emulation, and the definition will have to be operational, a form of Turing Test.
KN,
I have no difficulty conceiving of the possibility that a crowbar lacks phenomenal consciousness. Ditto for computers, even though they process information.
If information can be processed without an accompanying phenomenal consciousness, then why specifically should it be inconceivable for a system to process information in a human-like way without such a conscious accompaniment?
What does it mean to process information in a human-like way?
keiths, to KN:
petrushka:
Similarly to humans, of course. 🙂
How similar, you ask? As similar as necessary for you to claim that there must be first-person phenomenal consciousness present.
You grant that humans possess such a consciousness, and as far as I can tell you don’t think that computers do, despite the fact that they process information. Thus, there must be a transition, whether abrupt or gradual, from unconscious to conscious as we consider information processing that is more and more human-like.
The question is “What specific aspect(s) of human information processing make the difference, and how do those aspects bring about phenomenal consciousness?”
keiths:
petrushka:
keiths:
petrushka:
They can be rephrased as “how” questions, but so can mine…
…so your objection fails.
petrushka,
Such a test would merely establish whether the system reports qualia, not whether it experiences them — unless the ability to report them is invariably accompanied by phenomenal consciousness. But what justifies the latter assumption?
My earlier challenge still applies:
Neil,
I’m still interested in your answer to this: If brains don’t process information, as you claim, then what exactly is your brain doing when you visualize my bowling ball/wooden ramp/egg carton scenario?
Neil:
keiths:
Neil:
What you haven’t acknowledged is the rather obvious fact that information flows from the outside world to us via our perceptual apparatus. It isn’t a “user illusion”.
keiths:
Neil:
To inform what, if not our brains? Our livers?
That makes no sense. You seem to be using “process information” as some sort of magic incantation.
I’ll grant that it is part of your religion. But I don’t do presuppositional apologetics. Whatever your presuppositions are, I do not share them.
It’s not that I haven’t acknowledged that. Rather, I have explicitly denied that. It is nonsense. The “hard problem” arises from that nonsense.
Dennett spends a lot of time in his book arguing against Cartesianism. But he accepts what you are also insisting on here. And that’s the fundamental mistake behind Cartesianism.
It is also the mistake of Berkeley’s idealism. According to Berkeley, God is transmitting messages to us, resulting in perception. You just replace “God” with “nature” in your version.
I’ve already explained why this is incorrect twice. Should there be some sort of moderation intervention regarding your “dishonesty”?
keiths:
Neil:
It makes perfect sense. Your brain is doing a lot of work when you visualize my scenario, yet you claim that it isn’t processing information. Well, if it isn’t processing information, then what is it doing, according to you?
keiths:
Neil:
The information we receive from our senses reduces our uncertainty about what’s going on in the external world. It’s Shannon information.
There’s information in our sensory input, but that doesn’t mean that nature is “transmitting messages to us”.
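The “reduces our uncertainty” claim has a standard quantitative reading. A minimal sketch (my own illustration; the distributions are made up) of information gain as a drop in Shannon entropy:

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Before looking outside, suppose four weather states are equally likely:
prior = [0.25, 0.25, 0.25, 0.25]

# A glance out the window rules out two states and makes one dominant:
posterior = [0.9, 0.1]

# The information received is the drop in uncertainty, in bits:
gained = entropy(prior) - entropy(posterior)
assert entropy(prior) == 2.0   # four equally likely states = 2 bits
assert 1.5 < gained < 1.6      # the observation supplied ~1.53 bits
```

On this reading, saying that sensory input carries information is just saying that it shifts our probability distribution over states of the world — no “messages from nature” required.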
Neil, you seem to be assuming that the idea of cognition as information processing relies on the idea that the information is already out there, lying around, waiting to be picked up by brains and processed. I agree that that would be a problematic metaphysics, but is that what the metaphor commits us to?
I suppose I’m more inclined to see brains as mechanisms for converting Shannon information into semantic information. How they do so is one of the deeper (though tractable, I think) problems in cognitive neuroscience.
Hi keiths and Neil,
A few years ago, I wrote an article on zombies for Uncommon Descent, which may interest you. It has a very in-depth discussion about the various kinds of zombies – and about duplicates as well. Here it is:
Zombies, duplicates, human beasts and consciousness (January 17, 2014).
Cheers.
I think you all are mostly missing Neil’s point. You are all assuming that information IS something, rather than a way of talking about phenomena.
keiths:
walto:
No, you haven’t. Epistemology is philosophy, whether you like it or not, and claims about what we will never know, like yours and Comte’s, are epistemological claims.
Information is an abstraction. Processing information is a bit like following a map. Useful, but the map is not the territory, and reading a map is not the same as being in the territory.
If a brain, biological or electronic, does what a brain does, it is a brain. It doesn’t matter whether it is a duplicate or an emulation. Every one of us is a duplicate of previous living things (with modifications). If we reach a point where the behavior of electronic brains is equivalent to biological brains, they will be equivalent in every aspect.
I don’t think that is possible with current technology, and I am skeptical it will happen. Not in the foreseeable future.
petrushka,
Neil isn’t denying the existence of information. He just claims that we “manufacture” it, never receiving it from the external world.
petrushka,
In other words, you’re a functionalist. I lean that way myself, but it doesn’t solve the hard problem.
Thanks, Vincent. I’ll take a look at your OP later today.