At Aeon, philosopher Philip Goff argues for panpsychism:
It’s a short essay that only takes a couple of minutes to read.
Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:
I maintain that there is a powerful simplicity argument in favour of panpsychism…
In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.
…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.
Panpsychism is crazy. But it is also highly likely to be true.
I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.
Exactly right. Good post. Charlie should spend a little time reading people other than rank cranks.
Firstly, it is not my thinking. I am relaying Segall’s understanding of Whitehead’s thinking.
And secondly the example you give above may very well be a fallacy of division, but it bears no resemblance to Whitehead’s thinking. He explains in detail his reasoning as to the way he understands certain entities to be organisms in their own right and it does not entail, “we see living organisms all around us and therefore subatomic particles are organisms”.
Here are a couple of passages from Segall which you may want to read and hear in context to get an idea of what he has to say about Whitehead.
In Retrieving Realism: A Whiteheadian Wager Segall writes:
In Romantic Science in Schelling and Whitehead Segall says:
What he is saying is that the materialistic proposition of mind emerging from matter is incoherent.
In Physics of the World-Soul: The Relevance of Alfred North Whitehead’s Philosophy of Organism to Contemporary Scientific Cosmology Segall writes:
So you are correct in that he believes Whitehead’s philosophy to be closer to reality than philosophy based on scientific materialism.
It bears perfect resemblance to what you posted above, which was a classic example. I may even use it in a classroom some day.
CharlieM,
IOW, Seagall doesn’t like science, and has nothing worthwhile with which to replace it. Oh, and he can bullshit with approved jargon like the best of them.
Glen Davidson
I would prefer to say that brains implement the response we call experience or qualia. I don’t think “gives rise to” has any useful meaning.
The current implementations of parallel processing are rather limited in application. I don’t see that they are on the path to AI. Where that path lies, I don’t know, and neither does anyone else.
Emergence is a useful term when applied to the behavior of systems. I don’t see that it explains anything. It is just an observed phenomenon. It isn’t explanatory in the sense of being part of a recipe. It’s just an after-the-fact oh-wow response.
Look, you can call anything anything you want, but you aren’t explaining anything and aren’t pointing the way toward progress in understanding mental phenomena.
What I am trying to point out is that there is a well-known and observable difference in the architecture of brains when compared to digital computers. I have cited a well-known experiment in which a digital circuit “hijacked” an analog effect in the chip’s behavior and was able to perform a task in an unexpected way. I think this is similar to the way brains work.
Brains do not store representations. They respond. I hesitate to say “holistically” because that word conjures up bad medicine and other quackery, but they do. Brains are able to respond to objects like bicycles in ways that are difficult to emulate in electronic computers. Neurons are simply not fast enough to compute the points or pixels involved in discriminating a bicycle from, say, a picture of a bicycle. I don’t think there are enough particles in the universe to emulate a human brain using conventional architectures.
The upshot of my assertion is that the answer to the question of how atoms in brains give rise to personal experience and qualia is this: these phenomena are the behavior of brains, and that behavior is extremely dependent on an architecture that we don’t fully understand and cannot yet emulate.
Does “behavior of brains” explain anything?
No, but it would be a useful term if we find a way to emulate that behavior.
Let me try another approach. Consider an analog computer that “computes” trig functions. A slide rule can implement this, but so can an electronic device.
I would prefer not to say that the device processes information or computes. There may be some sense in which it does, but I would prefer to say that an electronic analog computer responds. It embodies the relation between input and output. There is a lag time, but the output tracks the input without anything resembling data processing or computation.
Now let us consider the possibility of a digitally configurable analog computer, one in which many configurations can be stored and recalled. I would argue that there is no representation of the input stimulus or the output response. I may just not be thinking about “representation” correctly, but what I mean is that responses are not stored in bit images.
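A toy sketch of such a configurable responder (my own illustration, not a claim about any real device): the stored “configuration” is just sample points of a response curve, and the output tracks the input by interpolation, with no bit image of any particular stimulus or response stored anywhere.

```python
import bisect
import math

class ConfigurableResponder:
    """Toy 'digitally configurable analog computer': a stored
    configuration (sample points of a curve) shapes the input-output
    response directly. No bit-image of any particular stimulus or
    response is stored; the device just embodies a relation."""

    def __init__(self, xs, ys):
        self.xs, self.ys = list(xs), list(ys)  # the 'configuration'

    def respond(self, x):
        # Output tracks input by linear interpolation between samples.
        i = bisect.bisect_left(self.xs, x)
        if i == 0:
            return self.ys[0]
        if i == len(self.xs):
            return self.ys[-1]
        x0, x1 = self.xs[i - 1], self.xs[i]
        y0, y1 = self.ys[i - 1], self.ys[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# 'Configure' the device with nine samples of a sine curve...
xs = [i * math.pi / 8 for i in range(9)]
sine = ConfigurableResponder(xs, [math.sin(x) for x in xs])
# ...and it now embodies an approximate sin relation:
print(round(sine.respond(0.3), 2))  # → 0.29 (true sin(0.3) ≈ 0.2955)
```

Swapping in a different (xs, ys) table reconfigures the device for a different relation, without anything resembling a stored image of the inputs or outputs it will encounter.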
My metaphor for what brains do when responding to stimuli is sympathetic resonance. Neural circuits are triggered by resonance with stimuli. I don’t think it is useful to call this data processing, because I think the analogy with what electronic computers do is unhelpful.
One consequence of this metaphor is that it implies there is no physical difference between learning that is inherited via biological evolution and learning that is acquired via experience.
petrushka,
You’ve been all over the map in this thread, evincing deep confusion about representation, information processing, and the hard problem. Now you’re even arguing that parallel processing isn’t information processing!
Your personal confusion doesn’t mean that there’s something wrong with the concepts that cognitive scientists and neuroscientists employ. It’s just a sign that you’re confused.
CharlieM,
I would like to see him support that claim. Hence my statement:
Nothing of Segall’s that you have quoted actually attempts such a demonstration. It’s just assertion.
keiths,
He supports it by using a reductio that depends on the fallacy of division. That’s it.
No, it comes from thinking about the relationships between mind and matter, perceiving and forming concepts.
I would like to think that you agree that, epistemologically speaking, the conscious mind is primal. We arrive at concepts such as “subject” and “object” through thinking. We are taught to think of ourselves as somehow apart from nature, observing external objects; to think that reality consists of entities in motion and that qualia are just subjective representations lodged somewhere inside our brains. But this leads to the thought that even our brains are just objects among other objects, in other words thought of in terms of qualia. This is what Barfield terms “onlooker consciousness”. It is just a way of looking at things from the standpoint of a relatively recent Western civilization. It was not always so, and it will in all probability change in the future.
As he writes in The Rediscovery of Meaning and Other Essays:
According to Barfield, the future will belong to “final participation”.
IMO the modern Western outlook is a necessary stage to free humanity from dependence on external authority, whether it be religion, science or whatever. But with freedom comes the feeling of stepping into an abyss where consciousness and self become illusions produced by matter in motion. To reach final participation it is up to each individual to take the plunge and move forward. So far I’ve only been able to dip the tip of my toe.
Richard Carrier has had a glimpse. In his book Sense and Goodness Without God: A Defense of Metaphysical Naturalism, he tells of how he had “powerful mystical visions” and:
Goethe in his poem “Nature” says: “Each thing she makes has its own being, each of her manifestations is an isolated idea, and yet they are all one.” His “gentle empiricism” is an attempt at final participation.
People may think of it as just an illusion or a figure of speech when others talk about expanding the mind or being at one with the cosmos, but in so doing they are denying the experience of others.
For what it’s worth I think that is pretty much all wrong.
I don’t think that the conscious mind is even epistemologically basic — if it were, why did it take so long for a philosopher like Descartes to come along and say so? What is epistemologically basic, if indeed anything is, is our embodied being in the world, as creatures who encounter a variety of things that are friendly or dangerous, able to satisfy our needs or thwart them, useful or useless, and (in our social environments) opportunities and obstacles for cooperation or competition.
Except in very rare cases, we don’t invent concepts whole-cloth through reflection: our experience of the world, like the experience of many animals, is itself conceptually structured. What distinguishes us may be our capacity to become aware of concepts as concepts, and surely language plays a crucial role here.
For that matter, the subject/object distinction emerges among specific philosophers to solve specific problems that have complex causal origins. There’s nothing like it in Buddhist philosophy, or Aztec philosophy, or (for that matter) in ancient Greek philosophy. It congeals in the historical arc from Descartes to Kant as they are trying to reconcile Christian ethics, bourgeois capitalism, liberal democracy, and mechanistic physics.
There’s nothing deep, essential, or necessary to this distinction as part of “the evolution of human consciousness”, which is really just a transposition of Neoplatonic mysticism onto Western intellectual history.
If people are just expressing how their experience seems to them, that’s one thing; if they are making assertions about how things are, that’s quite another.
Barfield and Goethe.
Oy.
Goethe is a very interesting philosophical poet and philosophical scientist. He’s not to be dismissed. There’s a lot of really interesting philosophy of biology and philosophy of science going on there. He’s not rigorous or systematic, but those aren’t the only intellectual virtues. Goethe’s influence on later German philosophers such as Hegel, Nietzsche, and Husserl is vast and complicated.
That stuff is not exactly my cuppa (except Fechner), but OK.
Did read Werther though. (As Trump would say, SAD!)
CharlieM,
Odd that I ask for this…
…and you ply me with quotes from Owen Barfield and (of all people) Richard Carrier, neither of which demonstrates Segall’s point.
So Segall never defends his own claim?
Then I think you need to increase your dosage of “modern Western outlook”. You’re still far too dependent on Rudolf Steiner and the argumentum ex rectum.
Kantian Naturalist,
Sorry KN, I don’t think I was very clear in what I was saying. What I meant was that in order to carry out any epistemological activity we begin by thinking and in that sense the conscious mind is primal. Do you agree with that?
keiths,
Hi keiths, thanks for your challenging thoughts. It will take me more time than I have at the moment to give a decent response. I will probably be too busy for the next day or two, but I will reply.
If you would like to get a better idea of Segall’s views on materialism and panpsychism you could always watch the video Consciousness Beyond Materialism
Mostly nonsense, with a “new-age religion” flavor — In my opinion.
We don’t need a better idea of his views, we need a good reason to consider them in the first place.
So far we just have a lot of appeal to authority and a false dichotomy between “matter” and “mind.” The latter is hardly correct, as the “mind” is conceived of as having evolved. There is a sort of Cartesian assumption in practice, but certainly not in theory.
Glen Davidson
No, I do not. I don’t think that thinking is an activity of the conscious mind alone. (I see that as the hangover of bad Cartesianism.) I think that thinking is itself an essentially social and linguistic activity, though one that we can be aware of.
And I think that the ‘starting point’ of epistemological reflection is the experience of ‘break-down’ in our conceptual grip on the world as we experience it. What ‘triggers’ an epistemological turn in our thought is when our familiar ways of coping with the world (which are themselves conceptually structured) fail to work adequately.
There are many ways in which this can happen, but in the history of “Western” philosophy this happens when people become aware of the need to resolve or avoid conflict that arises when different conceptual schemes come into contact.
One can think here about how early Greek metaphysics was inspired by awareness of a multiplicity of cultures and myths, or of how Descartes’s epistemological turn in the Meditations took place in response to both religious conflict (e.g. the Thirty Years War) and conflict between the authority of experiment and the authority of the Church (e.g. Galileo, Bruno).
So while conceptually structured conscious mental activity is (of course!) involved in epistemology, I don’t know if it helps to say that epistemology begins with that, except in a way that begs all of the questions.
CharlieM:
Again, I just want to see Segall — or anyone — demonstrate his claim that physicalism leaves “no room” for life or consciousness.
keiths:
petrushka:
None of which addresses the question:
You’ve noted that brains do parallel processing. Given that, why insist — bizarrely — that they don’t process information?
petrushka:
I refer you again to my wooden ramp example:
You represent that scenario “in your mind’s eye” and work out what will happen. Your mental model — of the ramp, the ball, the gymnasium floor, the eggs, and the laws of physics — is a representation.
How about another bowling ball scenario.
Place a person in a chair that has a headrest. Hang the bowling ball from a high ceiling by a cord that allows the ball to just reach the person’s face when stretched taut.
Now, bring the ball to the person’s face so that it touches their nose when they have their head firmly planted in the headrest, then release it. Instruct the person that they are not to move under any circumstance.
What happens? Where is the person’s head after several seconds?
Another scenario:
It’s a high fly ball to center field. Oh my! The center fielder has stumbled, but the right fielder is covering. Looks like an easy out.
Now, describe how a robot right fielder might calculate the trajectory of the ball and the need for action, and how a human would. Be specific about the algorithms involved.
Depart, for a moment, from the abstraction, the map, and talk about what is physically going on. Describe the architecture of the systems, their similarities and their differences.
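For what it’s worth, the “robot” half of that contrast is easy to caricature in code: solve the projectile equations explicitly (a minimal toy sketch of my own, ignoring drag and spin, so the numbers are unrealistically long). The human half is often described in the perception literature as the “gaze heuristic”: run so that the ball’s angle of elevation keeps rising at a steady rate, which requires no trajectory model at all.

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def landing_point(v, angle_deg, h0=1.0, g=G):
    """Explicitly compute where a drag-free fly ball lands.

    v: launch speed (m/s), angle_deg: launch angle above horizontal,
    h0: height of the ball at contact (m). Returns carry distance (m).
    """
    a = math.radians(angle_deg)
    vx, vy = v * math.cos(a), v * math.sin(a)
    # Positive root of h0 + vy*t - (g/2)*t^2 = 0 gives the flight time.
    t = (vy + math.sqrt(vy * vy + 2 * g * h0)) / g
    return vx * t

# A 40 m/s fly ball at 35 degrees carries about 155 m with no drag:
print(round(landing_point(40.0, 35.0)))  # → 155
```

The point of the contrast: the explicit solver needs the launch parameters and the physics as represented quantities, while the gaze heuristic is a closed feedback loop on a single perceived angle.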
I should mention that some group recently developed a robot that can learn to catch. Not programmed to catch, but designed to learn.
This is a mosquito level skill. And it is cutting edge.
You are free to engage in an actual discussion here. That would involve departing from your usual pit bull jaw stance and having some back and forth.
Like, instead of picking on one cherry from my discussion, one that I don’t even care about, you might tell us how you think brains integrate masses of “data” into coherent responses. Describe the chain of computer-like brain behavior that does the integration. Explain how a 100 Hz clock rate is compatible with the performance of people and animals, and why the clock rate of supercomputers is just barely able to match humans in games of pure logic.
petrushka,
You’ll enjoy TSZ more if you learn to expect your ideas to be challenged here, instead of bristling when that happens. Your claims are fair game for criticism, just like everyone else’s.
And you’ve made some bizarre ones, including the claim that brains don’t process information. Which you later undermined, as I pointed out:
Furthermore, you’ve suggested that parallel processing is essential for phenomenal experience:
Parallel processing is information processing. If phenomenal experience depends on parallel processing, as you suggest, then it depends on information processing.
And if brains don’t process information, as you claim, then brains don’t experience, by your own logic.
Your position is incoherent, so of course I’m going to point that out.
You’ve also claimed that brains don’t operate on representations:
I challenged you with the wooden ramp scenario, writing:
Instead of arguing for why that mental model doesn’t constitute a representation, you merely presented another bowling ball scenario.
But your scenario just proves my point yet again. By reading your description, I was able to model your scenario in my mind and predict what would happen. It was a representation, and a dynamic one.
petrushka,
Regarding your outfielder scenario, you write:
Why, when none of the arguments I am making depend on those things?
Ditto for your other demand:
How is that a response to anything I’ve argued?
In a way, it’s just semantics whether or not brains process information.
On the other hand, though, the scientific meaning of information has certainly shifted toward considering the world as being full of information that we simply take in and process. That’s because physically there’s no especial difference between information in photons and information in nerve impulses, and there’s also no categorical difference between information in photons and information in computer circuits. Humans, and computers with sensors, sample information from the world and translate it into forms that they can process readily.
Of course the brain also happens to process information rather differently than digital computers do, but again it’s hard to see what categorically distinguishes what computers do with information and what humans do with it.
It’s not impossible to use different words for raw data and resulting information, of course, and that happens to fit with one definition of information. It’s just rather unwieldy and artificial to label the information (or “data”) in a photon differently than information encoded in the optic nerve, and it’s as artificial and unwieldy to label information in an optic nerve differently than information from camera pixels in computer circuits.
It’s an impediment to communication to use different terms for information depending on its form either in the environment or in information processing systems (brains or computers), because there’s nothing in physics that suggests that there’s any categorical difference between information in a photon and information in binary code in a computer. It’s all just a matter of causal transformations and, crucially, one can “reverse the process” and turn the computer information back into photonic information. That’s how fiber optics work, after all, as well as computer screens.
It’s semantics, but the semantics are important enough in this case, since information is understood as a very fungible commodity in physics and in information processing at the present time. We can’t really justify calling information impinging on the eye by a term different from the information travelling from the eye to the brain, let alone justify calling the information in the nerves something different from the information in computer circuits (except by modifiers, like “neural information” and “digital information,” although these are not exclusive terms), mainly because they’re all interchangeable anyway.
Glen Davidson
There’s the hard problem in a nutshell.
Start with a completely bogus idea about information (as that quote does). And that leads directly to the hard problem.
You might as well ask “Why don’t photons have conscious experience?”
Keiths, I have no idea what you are arguing.
Information, as applied to what brains do, is a metaphor, not a fact.
The metaphor breaks down when you try to emulate brains. This is not a trivial error, and it isn’t correctable by faster computers and better software. It is a conceptual error.
When you misunderstand what brains are doing, the hard problem appears.
This is analogous to the conception of DNA as a blueprint. The analogous hard problem is understanding what is happening during growth and development.
I can only try to suggest the nature of the error by referring to the behavior of cellular automata. Some rules produce trivial repeating patterns, and some produce unpredictable patterns. I would argue that the chemistry of life is unpredictable. You cannot find a shortcut that predicts the result of changes to DNA. There is no grammar and syntax that orders statements in DNA. You cannot translate DNA into a logical emulation and make useful predictions about “design” improvements.
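That contrast between trivial and unpredictable rules is easy to exhibit with elementary cellular automata (a standard construction, nothing specific to DNA): Rule 250 grows a perfectly predictable checkerboard triangle, while Rule 110 is known to be Turing-complete, so in general there is no shortcut to its behaviour other than running it.

```python
def eca_step(cells, rule):
    """One step of an elementary cellular automaton (wrap-around edges).
    `rule` is the Wolfram rule number, 0-255; each cell's next state is
    the bit of `rule` selected by its (left, self, right) neighbourhood."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=15):
    """Run `rule` from a single live centre cell; return rows as strings."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = []
    for _ in range(steps):
        rows.append("".join(".#"[c] for c in cells))
        cells = eca_step(cells, rule)
    return rows

# Rule 250: a trivially predictable repeating triangle.
print("\n".join(run(250, steps=5)))
# Rule 110: Turing-complete; no general shortcut predicts its patterns.
print("\n".join(run(110, steps=5)))
```

The two rules are the same kind of object, distinguishable only by running them, which is exactly the situation claimed here for the chemistry of life.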
Something like that is true of brains. Unless there is a breakthrough in architecture of computers, we cannot emulate brains. We can do many useful things that human thinkers do, but we cannot emulate being a person. Because there is no grammar or syntax to what brains do. What brains do is more nearly analogous to what a developing embryo does.
You might as well have said nothing, Neil.
Can you do anything but declare your a priori beliefs as fact?
Glen Davidson
petrushka,
I think you do. My comments above are neither cryptic nor vague. Parallel processing is a form of information processing, and mental modeling is a form of representation. Those are huge problems for your position. Why not address them instead of trying to sweep them under the rug?
petrushka,
No. Brains really do process information, and this is obvious. You and Neil have both agreed that people process information. If people aren’t using their brains to process information, then where does the processing take place? In their gall bladders? The two of you have been dodging this question for most of the thread.
petrushka,
It isn’t a metaphor, and you haven’t identified any such breakdown.
Even if it were somehow actually true that brains don’t process information, that wouldn’t make the hard problem go away.
Your clunky suggestion was that we should refer to brains as “behaving”, not processing information. Well, why are certain kinds of physical “behavior” accompanied by first-person phenomenal consciousness, when other kinds — such as the behavior of a vacuum cleaner — are not?
It’s still the hard problem. You’ve just replaced “information processing” with “behavior”.
petrushka,
The hard problem doesn’t ask whether we can emulate brains. It asks why certain kinds of information processing — or “brain behavior”, to use your awkward substitute — are accompanied by phenomenal consciousness, when others are not.
Neil,
No. The hard problem remains even if you foolishly deny that brains process information. See my reply to petrushka above.
That obviously does not follow from anything that Glen wrote.
Yes, people are using their brains to process information. It does not follow that brains are processing information.
Neil:
That’s as goofy as saying
Hearts pump blood, and brains process information. My addition scenario makes your mistake obvious:
That’s your starting presupposition. I do not make that presupposition.
Neil,
It’s not a presupposition. It’s a conclusion based on evidence and reason, including the following:
Where did the addition take place, if not in your brain? On a street corner in Poughkeepsie?
Appallingly bad reasoning.
Neil,
Let’s hear your rebuttal, then.