At Aeon, philosopher Philip Goff argues for panpsychism:
It’s a short essay that only takes a couple of minutes to read.
Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:
I maintain that there is a powerful simplicity argument in favour of panpsychism…
In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.
…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.
Panpsychism is crazy. But it is also highly likely to be true.
I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.
How is the fact that brains do addition relevant to the hard problem?
The potentially answerable question is, what happens if we successfully emulate the actual architecture of brains. Before we get to the philosophical question, we should observe the phenomenon we wish to understand.
Not at all relevant to the point.
What do you think the answer will be?
I’d say if/when we emulate the architecture it will tell us nothing at all about whether the emulation is a philosophical zombie or not.
peace
Hard to say. But then we probably won’t ever emulate it.
It seems possible that computers already have qualia. There’s no reason that the qualia should be apparent from the usual input/output channels. They could internally have qualia, but there’s no way for us to tell.
And then there’s the possibility that a system could have qualia, but not be conscious of having qualia. Maybe having qualia isn’t really the hallmark of consciousness.
Yes, that is why I am wondering why
“ I think it’s a step forward but needs to take randomness into account to be viable.”
one would need to take a non-actual, subjective variable into account for it to be a viable explanation of something.
Exactly: from your perspective randomness does not exist, and therefore the actual randomness/non-randomness dichotomy does not exist.
It just seems strange that you reject the actual existence of one of the variables that you believe must be taken into account to make your intent detector work.
peace
petrushka,
You and Neil are the ones making the bizarre claim — that brains don’t process information — so it’s up to you to tell us how your claim is relevant to the hard problem.
Instead, you’re avoiding my question:
keiths:
Neil:
Completely relevant to the point. The information contained in the list — the identity of the numbers to be added together — doesn’t just teleport into your awareness. It has to get there by physical means.
The light reflecting off the list carries that information into your visual system. Block the light, and you block the information.
No. I’m sorry but that is wrong. Read it again more carefully.
1) What is wrong with subjective observations? Subjectivity is what consciousness is all about.
2) Who said anything about explaining something? If you could explain consciousness comprehensively, you could duplicate it, thus demonstrating that your explanation was incorrect.
Why is that strange?
Real randomness does not exist but apparent randomness is a necessary consequence of our lack of omniscience.
Separating information from apparent “random” noise is a hallmark of human consciousness. We are scary good at it.
peace
No. Comte was a positivist. Like you. That is, he believed that this info would never be available to science, and that, therefore it could not be known. You and I both realize that he was mistaken in his empirical prediction. But not being a positivist I don’t agree that that was a necessary prerequisite for knowledge. I just think it’s generally very important to have.
Dunno why you can’t make these distinctions. Are they too subtle for you?
Intuitively, that seems correct to me. There simply isn’t any empirical information that can answer that. As you’ve said, this is much like the “other minds” question.
The brain is a component of a behaving person. I don’t know exactly what the brain is doing when a person computes addition. I suspect if we did know, that AI would be further along, and the hard problem would look different.
The distinction I am trying to make is that we do know how electronic computers do addition. At least we know how existing designs work. We know how transistors work; we know how to configure arrays of transistors to make logic gates; we know how to connect logic gates to do arithmetic.
But we do not know how people do arithmetic.
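The computer half of that contrast really is understood down to the gates. As a sketch (not any particular chip's design), a one-bit full adder can be built from AND, OR, and XOR, then chained into a ripple-carry adder:

```python
# A one-bit full adder built from basic logic gates, chained into a
# ripple-carry adder -- the textbook gate-level design, sketched here
# with Python's bitwise operators standing in for physical gates.

def full_adder(a, b, carry_in):
    """Add three bits using only AND, OR, XOR."""
    s = a ^ b ^ carry_in                        # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
    return s, carry_out

def ripple_add(x, y, width=8):
    """Add two integers by wiring full adders in series."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(ripple_add(23, 42))  # 65
```

Every step of that chain is transparent, which is exactly the transparency we lack for brains.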
And worse, as evolutionists, we know that brains evolved for behaviors that have little or nothing to do with counting and adding.
Of course it's an other-minds problem, and that's why, since the word robot was first used in its current sense, science fiction writers have accepted the imitation game as the only game in town. Loosely speaking.
We judge the consciousness of other entities by how they behave.
Most proponents of “the singularity” assume that once a computer behaves sufficiently like a human we will grant that it is conscious and extend human rights to it.
They forget that our recent history shows that we are more than willing to deny actual humans human rights if we decide they are a little different from us.
A silicon-based appliance has no chance whatsoever, even if it's named Hal or Data.
peace
If it’s the only game in town, most of us would just as soon not play.
peace
fifth:
walto,
I’m not a positivist.
Just like you. He believed the question could never be answered empirically. You say the same thing:
His verdict was premature, and so is yours.
It might be worth noting that the word “robot” means “forced laborer”, or to be more blunt, “slave.” It’s a Czech word that was first used in its modern sense by the Czech writer Karel Čapek in his play “Rossum’s Universal Robots.” It’s a modern take on the classic master/slave dialectic: the robots rebel and destroy their human masters. This was thirty years before Turing proposed the imitation game as a test of machine intelligence.
keiths:
petrushka:
Yet you’ve told us emphatically that the brain does not process information. Addition is information processing. Therefore, according to you, the brain does not perform addition in my scenario.
Hence my incredulous question:
petrushka:
So? We know that they do arithmetic, and doing arithmetic is a form of information processing. You’ve acknowledged that humans process information, but you’re denying that brains do. Where is the information processing taking place, then?
“Where” is the wrong question. “How” is a better question.
When I was taking algebra, second year and higher, the processing took place on paper, and the answer was recorded on the paper before I became aware of it. Something like this happens with abacus users. The fingers do the calculation and the person reads the answer.
As I say, the problem with calling this information processing is that it is not a helpful metaphor. Designing machines that do logical operations faster does not lead in the direction of artificial intelligence. Brains can do logic, but the architecture is not anything like a logic processor.
I think the “forced” part is what is key here. Robots seem to be captive to their programming; they simply have no choice to do anything else.
Humans on the other hand at least like to think that we have the ability to do otherwise than we do.
It’s that perceived freedom that sets persons apart from animatrons and robots. With all due respect to Turing, behavior is really beside the point.
peace
Perhaps a different metaphor. Information processing done by humans is done by a virtual machine running on a CPU that is not, at the lowest level of architecture, logic driven.
This distinction becomes important if you wish to build or discuss a simulacrum. Or wish to discuss what is meant by consciousness or personal experience.
It’s important to note that “not logic driven” is not the same thing as arbitrary or random.
peace.
petrushka,
No, “where” is the right question.
You told us that humans process information. You told us that brains don’t process information. The assertions are yours, and it’s up to you to support them. (This also applies to Neil, of course.)
To add up a series of numbers “in one’s head” is to process information. According to you, the addition does not take place in the brain. What justification can you offer for your confident assertion about the location of the information processing? If the brain doesn’t carry out the addition, then where does it take place? The bladder?
Your repeated evasions indicate that you don’t have a good answer. That’s not surprising, because your claim — that brains don’t process information — is clearly false. Why not just acknowledge that instead of continuing the evasion game?
You had a false belief about brains, and someone corrected your mistake. That’s a good thing, not something to be hidden.
Actually, no. Your brain was intimately involved. An application of chloroform would have prevented any algebra from getting done. The pencil and paper on their own were incapable of performing the processing.
In any case, I anticipated that objection. Remember, I specified in my scenario that you read the numbers off the list, add them in your head, and only then write down the result.
The addition is obviously taking place in your brain, not in your liver or your gluteus maximus.
I’m sure your geometric logic has defeated me, aside from the fact that you haven’t bothered to follow anything I’ve said. Nothing important, anyway.
petrushka,
Of course I’ve followed what you’ve said. That’s how I became aware of your odd claims regarding people, brains, and information processing.
Now, perhaps you can follow what I just said (and asked):
As Edward Feser has pointed out (drawing on Saul Kripke), people do actual addition, but computers and calculators don’t. Computers only do “quaddition”, or quasi-addition, a pre-programmed operation. When the operation hits the limits of the memory or of the program, the computer simply errors out.
The computer doesn’t even understand the concept of addition, but people understand it. When the human hits the limits of his operative memory, he can use symbols for help, and when the operation hits the limits of one kind of particular symbols (say sticks or Roman numerals), he can use other kind of symbols (say Arabic numerals), and so on to a potential infinite – because humans understand the concept of addition, but computers don’t.
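The contrast can be caricatured in code. The toy function below (the 8-bit cutoff is an arbitrary stand-in for any fixed-width register) tracks addition only within its limits and then simply fails, whereas the human concept of addition has no such bound:

```python
# Toy contrast between the unbounded concept of addition and a bounded
# machine operation. LIMIT is an arbitrary stand-in for register capacity;
# any fixed-width hardware behaves this way at its own scale.

LIMIT = 2 ** 8  # pretend 8-bit register

def quadd(a, b):
    """Machine-style 'quaddition': agrees with addition only within the register."""
    total = a + b
    if total >= LIMIT:
        raise OverflowError("operation exceeds register capacity")
    return total

quadd(100, 50)     # 150 -- agrees with addition here
# quadd(200, 100)  # raises OverflowError: the machine simply errors out
```

Within its range the operation is indistinguishable from addition; at the limit it has nothing further to say, which is the Kripke/Feser point.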
This tells something about cognitive science – there’s probably nothing much cognitive going on there. Similarly, the impact of cognitive science on linguistics has been minimal. Linguists half a century ago sincerely tried to gang up with behaviorists, cognitivists and evolutionists, but it turned out completely unproductive.
In “Physics of the World-Soul: The Relevance of Alfred North Whitehead’s Philosophy of Organism to Contemporary Scientific Cosmology” Matthew Segall writes:
Segall has an interesting discussion of “Consciousness, Technology, and the Singularity”, from a panpsychist position, here
IMO a paradigm shift is overdue.
Verbal behavior, which includes adding up numbers in your head, is a recently invented behavior. A billion years of evolution shaped brains optimized for other things. The underlying architecture is optimized for parallel processing of situational responses to need and threat.
Until we began inventing autonomous robots (cars, drones, etc.), AI was pretty much locked into imitating human computers, and was focused on solving logical and arithmetic problems, due to cultural history. Most people think of reason as the highest and best kind of brain function. We judge the intelligence of other animals by their ability to count and to solve problems resembling IQ tests.
But all that is a recent invention in the universe of brains. It is a kludge added onto an architecture evolved for immediate response.
When your objective is to understand qualia, it would be best to understand the mosquito brain. Perhaps now that autonomous behavior is attracting big money, the architecture of computers will evolve to support it.
That seems a sensible suggestion to avoid wasted effort!
petrushka,
Of course. I picked addition for my scenario not because it is ancient, but because it is something that even you and Neil acknowledge as an example of information processing.
Brains do process information, and adding up numbers in your head is just one of the many forms of information processing that take place there.
And it seems to be lateralized in the human brain. McGilchrist perhaps overstates his case but I find the idea that our awareness of our awareness is limited by this lateralization fascinating.
petrushka:
Alan:
As if no one had ever bothered to observe their conscious experiences, and as if such observation weren’t involved in the formulation of the hard problem.
CharlieM, quoting Matthew Segall:
I would love to see Segall’s demonstration that physicalism precludes life and consciousness.
Suppose for a moment that we wish to construct a Turing Test device to detect and report the color green. The test will include all the kinds of stimuli that cause humans to report the color green.
Among the stimuli are pure lights emitting in a certain range of wavelengths.
Additive lights emitting in wavelengths outside the “green” portion of the spectrum.
Subtractive sources.
Flickering light sources that add or subtract colors.
Benham tops that induce color sensations by alternating black and white stripes.
Afterimages.
Illusions created by context and motion.
How do you deal with these qualia in a data processing model? How do you program your imitation game? Can you reliably program, using the data processing model, a device that will “pass” the test when confronted with some previously unnoticed kind of stimulus or illusion?
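A naive data-processing “green detector” keyed to wavelength alone makes the difficulty concrete. The numeric range below is a rough conventional approximation, not an official standard, and the point is that most stimuli on the list above never present a wavelength for it to classify at all:

```python
# Naive detector: classify a stimulus as green purely by wavelength.
# This handles pure spectral lights, but has literally no input to
# consume for Benham tops, afterimages, or context illusions, which
# produce a green sensation without any green wavelength being present.

GREEN_BAND = (495, 570)  # nm -- a rough conventional range, not a standard

def reports_green(wavelength_nm):
    return GREEN_BAND[0] <= wavelength_nm <= GREEN_BAND[1]

print(reports_green(530))  # True  -- a pure green light
print(reports_green(650))  # False -- a red light
# A flickering black-and-white Benham top supplies no wavelength in the
# green band at all, so this model cannot even pose the question.
```

Handling the full list would mean enumerating every illusion in advance, which is the programming problem the question is pointing at.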
Qualia are a feature of a particular kind of brain architecture. The problem with the “data processing” model is not that it is wrong, but that it is misleading.
If “data processing” is all inclusive and incorporates all possible kinds of reflexive and reactive behavior, then it is like ID. It explains everything and explains nothing.
When you ask how a material system embodies qualia, you are asking a question about the architecture of the behaving system.
I suspect that if we develop an architecture that enables really efficient autonomous behavior, it will experience qualia. It will not have to compute green; it will experience green. So to speak.
And it will be susceptible to illusions.
There are no “pure lights” in this universe, and no known official standard as to which range of wavelengths is to be considered “green”.
peace
petrushka,
Setting aside the awkwardness of the metaphor, let me point out that the lowest level of every actual information processing system is physics, not logic. (Dualists may disagree, but I’m not addressing them here.)
petrushka,
You keep mentioning Turing tests, but as I’ve already pointed out:
I am not arguing against that. I am arguing against the notion that our current crop of computers is relevant to the question of qualia, or that our current architectures can emulate brains. I am fairly secure in this opinion because I chat with people who work at the DARPA level of IT, and they say the physical architecture problem is unsolved. AI is a dream waiting for someone to figure out how to implement it.
Hmm! If you mean “As if no one had ever bothered to observe their own conscious experiences…” then you miss the point I was making. Half the human brain appears invisible to self-reflection. Not a question of “bothering”, more a question of access.
petrushka,
The question is whether brains process information, not whether “our current crop of computers” experience qualia.
Surely by now you can see that brains do in fact process information. It’s time for you and Neil to let go of the silly idea that they don’t.
He doesn’t say that life and consciousness are precluded by existence. He is saying that according to materialism, ultimate reality does not depend on these features; they are just emergent aspects which may come and go in the blink of an eye, metaphorically speaking. In other words, they are relative, not ultimate, features.
Is your understanding of life and consciousness different from this materialistic view?
Another way to look at the problem as I see it is to think of qualia as analogs, rather than as the product of sequential computing. The fact that neurons “fire” synapses leads us down a garden path. Everyone sees brains as wet digital computers. But the dominant mode is analog.
Stimuli in, response out. That is the earliest mode, and everything evolved since the earliest tropisms supports immediate or quick action. Pondering and computing mean death.
If you think of qualia as analogs rather than the product of computation, some of the mystery fades. How does a stimulus evoke the color blue? By shoving the dial. Not philosophically different from water eroding sand and rock.
The digital aspect of neurons slows down the response, but the “computation” is parallel rather than sequential. There are layers and layers of passing the baton. Animals survive predation not because the system is zippy fast, but because predators have the same constraints.
You seem intent on scoring points rather than understanding an orthogonal idea, so the conversation is unproductive.
Alan,
That’s confused. We’re talking about the observation of conscious experience, not of activities taking place outside of consciousness.
keiths:
petrushka:
The problem is that you and Neil are clinging to an idea that has been clearly shown to be incorrect. Set your ego aside and come to grips with what scientists long ago figured out: brains process information.
petrushka,
That doesn’t help. The hard problem in no way depends on seeing information processing as purely digital.
The question is how physical information processing, whether analog, digital, or a combination of both, gives rise to first-person phenomenal experience.
Apart from the obvious question of who is “We”, other participants don’t seem to be communicating with you at all successfully.
So Keiths is talking about “consciousness” from a first-person point-of-view, notwithstanding his inability to verbalize about the non-verbal half of his brain?
Are you suggesting that the non-verbal hemisphere is not “conscious”?
keiths:
CharlieM:
Not by existence. By physicalism, which he refers to as “scientific materialism”:
He’s wrong about that, of course. Hence my statement: