At Aeon, philosopher Philip Goff argues for panpsychism:
It’s a short essay that only takes a couple of minutes to read.
Goff’s argument is pretty weak, in my opinion, and it boils down to an appeal to Occam’s Razor:
I maintain that there is a powerful simplicity argument in favour of panpsychism…
In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience… The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.
…the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.
Panpsychism is crazy. But it is also highly likely to be true.
I think Goff is misapplying Occam’s Razor here, but I’ll save my detailed criticisms for the comment thread.
https://philpapers.org/rec/DREPEO
My position can be boiled down to a few points:
1. We really don’t know what brains are doing with enough detail to emulate them, even though there appears to be no insurmountable technological barrier.
2. AI emulates mostly verbal behavior, which includes math, logic and verbal memory. We really haven’t emulated visual memory very successfully, as in pattern recognition. I believe Hofstadter quipped that we would be on the path to AI when a machine could recognize the letter “A”. The difficulty of this task, and the progress made on it, is reflected in CAPTCHA.
3. My assertion is that the difficulty with emulation is one of architecture. Digital processing, whether single-threaded or parallel, is sequential. If you Google “are brains digital or analog” you will find that brains are a hybrid. I don’t think this is a trivial point. I think this is a significant barrier to emulation.
4. Moving on to the hard problem: Many people have written many words on what it means to be self-aware or conscious. The ramifications of this problem have best been worked out in fiction. It is not a problem that you solve; it is one that you cope with.
On one hand we have fictional robots that assert their humanity and their rights. On the other hand, we have real people who seem incapable of empathy.
And it is empathy or its lack that delineates ways of coping with the problem. Are other entities like ourselves? How do you know? How do you rank partial similarities and differences?
I take this route because I speculate that we will eventually master the imitation game. And if we do, we will confront the hard problem with knowledge that we currently lack. If we can emulate the behavior of people in a non-biological substrate, we will know what it takes to have personal experience and consciousness.
As for the woo question — how does mere matter give rise to self awareness — I would argue that we have miscast matter as billiard balls, and the “mereness” is an illusion.
Quick question: If it were possible to freeze-frame a human brain and spend unlimited funds analyzing its state, would we be able to tell what it was representing?
If we do this with computers — make a core dump — we can tell exactly what it was doing.
Question that I think is related: If we did not have examples of phenotypes, could we read a sample of DNA and determine what it “means”?
No (in my opinion).
Not really. To know what it was doing, you have to also know about the source of its data and about some of the history leading up to that computation. A bit in memory doesn’t tell you what it is representing.
With the brain, the problem is far more difficult. For the computer, you can at least work out what the bits are. The design of that particular model is presumably available. With the brain, the problem is that no two brains are identical.
Is the argument here supposed to be that there’s nothing at all like information processing in brains because brains aren’t like computers in important respects? Or that brains aren’t representing their environments in any sense because they don’t have storage-and-retrieval mechanisms like digital computers do?
Here’s a very simple and I think clear sense in which representations are crucial to many kinds of cognition: a cognitive system will contain a representation of X if the system can guide the behavior of the organism with respect to X when tokens of X are not directly perceptually present to the organism. (I’m borrowing this from John Haugeland, in case anyone’s interested.)
This is sufficiently narrow that it’s clear why the term ‘representation’ is the right word (and not a mere metaphor or analogy) while still leaving open what kind of representations are involved. There’s been a lot of literature on the idea of cognitive representations as being map-like structures. This means that representations are icons rather than symbols: they are second-order resemblances, where the functional relations between neuronal assemblies are similar to — are maps of — the functional relations between the motivationally salient features of the environment.
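For what it’s worth, Haugeland’s criterion is easy to make concrete. Here is a toy sketch in Python (all names invented for the example, and nothing about it presumes that brains are digital): the system keeps a map-like internal state and can steer toward the target even when the target is no longer perceptually present.

```python
from typing import Optional, Tuple

# Toy illustration of Haugeland's criterion: a system "contains a representation
# of X" if it can guide behavior with respect to X when X is not directly
# perceptually present. All names here are made up for the example.

class Agent:
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y
        self.remembered_target: Optional[Tuple[float, float]] = None  # map-like state

    def perceive(self, target: Tuple[float, float], visible: bool) -> None:
        # The internal map is updated only when the target is actually perceived.
        if visible:
            self.remembered_target = target

    def step(self, speed: float = 1.0) -> None:
        # Behavior is guided by the stored representation, not by current input.
        if self.remembered_target is None:
            return
        tx, ty = self.remembered_target
        dx, dy = tx - self.x, ty - self.y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist > 1e-9:
            self.x += speed * dx / dist
            self.y += speed * dy / dist

cat = Agent(0.0, 0.0)
cat.perceive((5.0, 3.0), visible=True)   # the bird is seen once
cat.perceive((5.0, 3.0), visible=False)  # the bird ducks out of sight
for _ in range(3):
    cat.step()                           # stalking continues off the stored map
print(round(cat.x, 2), round(cat.y, 2))
```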
Not quite.
The trouble with information talk is that people don’t agree on what they mean by “information.” So we can try it with representation talk. Unfortunately, people disagree on what they mean by “representation”.
A computer receives representations in its input. And then it perhaps converts that to a different form of representation. That’s pretty much what “information processing” usually means.
A brain does not receive representations in its input. Just looking around does not give representations of the environment as input. The brain, instead, is in the business of constructing representations. And constructing representations is very different from processing representations.
That seems like a bit of sleight-of-hand, though: it amounts to saying that since “information processing” refers to “what digital computers do” (i.e. physically implement universal Turing machines), and since brains don’t do anything like that, then the term “information processing” shouldn’t be used to talk about brains.
But why not allow talk about “information processing” as a general category, and then stipulate that what brains do is a kind of information processing distinct from what computers do? After all, that’s what neuroscientists do!
I agree that brains are largely in the business of constructing representations rather than having representations handed to them, as with computers. But brains are also in the business of changing those representations in response to the activation of sense transducers, and there’s a lot of conceptual mileage to be gotten out of explaining the propagation of signals (i.e. modulations of spikes, changes in synaptic weights) in and across neuronal populations.
It might help if we clarified: is your objection more to “processing” or to “information”?
And do you suppose that constructing representations doesn’t involve processing information (at least information as physics typically considers it)? There’s visual processing occurring in the eye (arguably an extension of the brain) and also in the V1 section of the cortex prior to any visual representation.
More importantly, representations are hardly the point of brain activity. Learning and action are, and the various representations apparently exist in order to produce abstractions and understanding, as well as immediate action at times.
Glen Davidson
I wasn’t really speaking about meaning. A core dump gives you everything. The program and the data.
My point is that “data processing” is syntactical and sequential. Brain behavior is not. If you have the current state of a computer, you can step forward and predict the state after n clock cycles. Yes, you need to know the instruction set of the CPU, but that is a finite, fixed set.
With brains there is no instruction set and no way to predict future states. That is what I mean when I say brains behave rather than process information.
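To make the computer half of that contrast concrete, here is a minimal sketch of a toy machine (invented for illustration, not any real CPU): given the current state and a fixed instruction set, the state after n steps is fully determined. Whether anything analogous holds for brains is exactly what is in dispute.

```python
# Toy machine (made up for illustration): a fixed instruction set plus the
# current state fully determines the state after n steps.

def step(state, program):
    pc, acc = state
    op, arg = program[pc]
    if op == "LOAD":
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "JMP":
        return (arg, acc)
    return (pc + 1, acc)

def run(state, program, n):
    for _ in range(n):
        state = step(state, program)
    return state

program = [("LOAD", 0), ("ADD", 2), ("JMP", 1)]  # add 2 forever
print(run((0, 0), program, 7))  # deterministic: same input state, same output state
```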
I think there are no tokens in brain behavior.
A token is a fungible representation. One that would have the same meaning or function in any “data processing” system. One can make computers out of relays, vacuum tubes, transistors, or Chinese soldiers. If the rules of logic are the same, all systems will process data with the same result.
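That fungibility point can be shown in a few lines. The sketch below is only a stand-in: the same Boolean rules yield the same result no matter what physically realizes them.

```python
# Same logical rules, different "substrates": if the rules are the same,
# every implementation computes the same result (here, a half-adder).

def nand(a, b):
    return not (a and b)

def half_adder(a, b):
    # Built entirely from NAND, so it works for relays, tubes, transistors,
    # or rows of soldiers holding flags: anything that realizes NAND.
    n1 = nand(a, b)
    total = nand(nand(a, n1), nand(b, n1))   # XOR gives the sum bit
    carry = nand(n1, n1)                     # AND gives the carry bit
    return int(total), int(carry)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```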
Brains are reactive. They are evolved tropisms, with layers and layers of evolved inhibitors and exciters.
Data processing is simply a misleading metaphor. It implies tokens and syntax and grammar. It implies that a representation in a brain could somehow be transferred to another brain or an artificial brain.
There are a lot of people who believe that this can happen, and a lot of science fiction devoted to this concept, but it is bullshit.
petrushka,
Philosophers use “token” and “type” in a looser sense: when I say “I’d like a beer” I mean “I’d like any token of the type ‘beer’, I don’t care which token it is”, but when I say, “could you hand me that beer?” I mean “please give me that specific token of the type ‘beer’.” Likewise, a wild cat stalking a bird is tracking that specific bird, that specific tokening of the type ‘bird’ as instantiated in the cat’s cognitive mapping of her environment.*
* Presumably the cat’s awareness of the bird consists of qualia.
keiths:
walto:
That’s an interesting paper, but not relevant to my addition scenario, which is about information processing, not qualia or meaning.
The information processing is taking place in the brain. The colloquialism “I added up the numbers in my head” is not just a figure of speech.
These are examples of verbal behavior. Words are tokens, and language has syntax and grammar. Verbal behavior is something that people do, but talking, mathing and reasoning are not particularly good analogies or metaphors for brain behavior.
Someone centuries ago asserted that human reason is the crown of creation. Reasoning has given us tools, agriculture, machines, medicine and so forth. I cannot discount the utility of verbal communication and reasoning.
But talking and reasoning are things people do. They are not the thing that does it.
If you want to ask how a physical system that includes a brain has personal experience and self consciousness, you have to go a bit deeper than the outward visible behavior.
When Galileo and Newton wanted to know how the planets moved, they looked at how things fall. They ignored the unanswerable questions of why there is gravity and what gravity is. They concentrated on understanding the behavior of gravity.
At the moment, computers process data because that pays the bills. They are very good at that. They are very good at any problem that can be tokenized and reasoned about.
They are not so good at driving. They are getting better, but they call attention to the difference between behaving and reasoning: the difference between seeing a bicycle, and processing image pixels and computing probabilities.
The computational problem is intractable.
Meanwhile, the Evasion Twins — Neil and petrushka — are at it again.
Neil describes a comment as
It should be easy to point out the problems, then, but when challenged to do so, Neil tiptoes silently away.
Next, I chide petrushka for avoiding two issues that I’ve raised in response to his own adamant claims. He even quotes me on it…
…and then proceeds to sweep the issues under the rug, addressing them nowhere in his comment.
Why is it so hard for you guys to take responsibility for your own statements?
You could be right about everything, keiths, but I don’t care. I don’t care what you think, because nothing in your posts interests me, and nothing I say interests you.
petrushka,
Your claims do interest me, which is why I’m challenging some of them. Why not take responsibility for them, and either defend them, if you can, or withdraw them, if you can’t?
Your immaturity is a huge impediment to discussion. You should expect your views to be challenged at TSZ. Why whine about it when that happens?
Glen, to Neil:
Right. And consider the massive amount of information processing involved in transforming my verbal description of the bowling ball scenario — which after all is just a pattern of light on a computer screen — into a mental representation of the physical apparatus. Having constructed the representation, we then evolve it forward in time based on our knowledge of the world and of the laws of physics.
It’s beyond me how anyone can, with a straight face, deny that this amounts to information processing.
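The “evolve it forward in time” step is itself easy to illustrate. A minimal sketch, assuming for the sake of the example that the scenario is just a ball dropped under gravity (the original description isn’t quoted here, so the numbers are placeholders):

```python
# Minimal sketch: take a constructed "representation" of the scenario
# (height, gravity) and evolve it forward in time with small Euler steps.
# The parameters are placeholders, not the original scenario.

def evolve(height_m=2.0, g=9.81, dt=0.001):
    y, v, t = height_m, 0.0, 0.0
    while y > 0.0:
        v -= g * dt   # update velocity under gravity
        y += v * dt   # update position
        t += dt
    return t

print(f"ball reaches the floor after ~{evolve():.2f} s")
```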
I confess to being a haphazard thinker and poster. Okay? I simply don’t care much about being right. I do try to be right, but it isn’t a big emotional thing.
I downplay philosophy because I’m not good at it, and I don’t try to become good at it because philosophical questions strike me as unanswerable. When I encounter a question I am interested in, I try to find a piece of it that can be cast in an answerable form.
Hence:
The question of god’s existence becomes one of evidence, or one of defining god operationally, that is, in terms that physics can address.
The question of free will becomes a question of whether societies benefit from holding people accountable. Whether the general welfare can be improved by incentives and disincentives.
And the question of how matter gives rise to consciousness becomes: can we emulate animals that have brains? I would argue that emulation is a necessary, if insufficient, step toward answering the question.
I have been watching the progress of AI for 55 years. I first encountered the concept in a discussion of how many vacuum tubes it would take to have as many tubes as there are neurons in a human brain.
Since then we have had a lot of successes and a lot of disappointments.
I am not interested in whether brains process information. I consider that an unproductive question.
I am interested in what brains do. How they behave, how they work, as a question of how to emulate them.
Keiths, if this interests you, tell me your thoughts on the subject.
I am not interested in winning or losing an argument. I’d like to see someone put forward theories or conjectures, or to report on what’s in the current literature.
petrushka,
The term we’ve been discussing is “information processing”, not “data processing”.
And no, “information processing” implies neither tokens/syntax/grammar nor the transferability of representations from one brain to another.
Regarding your claim that “information processing” is a misleading term: You might be confused by the notion of brains processing information, but the experts aren’t. Why not broaden your horizons and do some reading on the topic?
I have posted at great length about why I think information processing is an unproductive metaphor. Boiled down, it leads to dead ends in the imitation game.
petrushka,
I don’t believe you. If it weren’t a “big emotional thing” for you, you wouldn’t avoid the issues I’ve raised with your claims, and you wouldn’t explode when I called you on it.
Remember, you’re the guy who once described admitting mistakes as being tantamount to “groveling”. Ego is a very big deal for you, petrushka.
petrushka,
It doesn’t. The experts understand that “information processing” is not limited in the way you imagine it to be.
The problem is with your understanding of the concept, not with the concept itself.
Okay, I’m fine with that. I am wrong to limit my understanding of the term and the concept.
But I would like to move on to the question of whether projects like Deep Blue are on a path that could lead to AI.
If information processing is a concept broad enough to encompass what brains are doing, what is it that brains are doing, and how are they doing it?
KN, to Neil:
It’s striking that both Neil and petrushka reject the term without even understanding how the experts use it.
petrushka,
The hot area of AI these days is “deep learning” based on neural networks. Not networks of real neurons, of course, but of abstract neurons that can be implemented either in hardware or software.
So the field of AI has very much taken a cue from nature.
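For readers who haven’t seen one, an abstract neuron is nothing exotic: a weighted sum of inputs pushed through a nonlinearity. A minimal sketch with made-up weights and no training loop:

```python
import math

# An "abstract neuron": a weighted sum of inputs passed through a nonlinearity.
# The weights below are arbitrary; in deep learning they are learned from data.

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid squashing function

# A tiny two-layer "network": two hidden neurons feeding one output neuron.
def tiny_network(inputs):
    h1 = neuron(inputs, [0.5, -0.3], 0.1)
    h2 = neuron(inputs, [-0.2, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.05)

print(tiny_network([0.9, 0.4]))
```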
petrushka,
Right, and that’s why I was skeptical of your suggestion of a Turing test for qualia:
At this point in the discussion I’m not even sure that I have qualia. How would I know? Maybe all y’all have some weird little phenomenal property that I don’t have!
Neural networks aren’t. Not yet.
https://www.quora.com/What-are-the-main-criticism-and-limitations-of-deep-learning
One of the criticisms of deep neural networks is that they are easily fooled.
Perhaps we are onto something, because brains experience illusions. But brains have hundreds of millions of years of evolution weeding out fatal misperceptions.
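“Easily fooled” usually refers to adversarial examples: tiny, targeted perturbations that flip a network’s answer while leaving the input essentially unchanged to our eyes. A toy sketch with a single linear “classifier” (made-up weights) shows the basic move; real attacks do the same thing with gradients through a deep network:

```python
import numpy as np

# Toy adversarial example: nudge the input a tiny amount in the direction that
# most changes the classifier's score (the sign of its weights), and the
# predicted label flips even though the input barely changes.

rng = np.random.default_rng(0)
w = rng.normal(size=1000)            # made-up weights of a linear classifier
x = 0.02 * np.sign(w)                # an input the classifier scores as positive

score = w @ x
perturbation = -0.021 * np.sign(w)   # imperceptibly small per-component change
adversarial_score = w @ (x + perturbation)

print(score > 0, adversarial_score > 0)   # True False: the label flips
```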
I’m more concerned about the cost of neural networks. Hardware costs and energy costs. That and scalability.
Not really a sleight of hand. If anything, I am wanting to be strict enough about the meaning of “information”, to avoid sleight of hand.
I just urinated. I guess that was information processing. I’ll soon be eating, and I guess that will be information processing.
If we make “information processing” too general, it becomes useless.
(1) I take a ruler, and adjust the calibration marks of that ruler so that they better suit my purposes;
(2) I line the ruler up against some object;
(3) I read off a length of 3 units;
(4) I convert that length to centimeters.
Given those steps, I would count only step 4 as information processing. What you are calling “conceptual mileage” is what I see as roughly analogous to steps 1, 2 and 3. And I’ll agree that they are important. They are involved in information construction, but they are not information processing.
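On that narrow reading, step 4 is the only part you could hand to a computer as-is: once the length exists as a token, converting it is a purely syntactic operation on that token. A one-function sketch (units chosen arbitrarily):

```python
# Step 4 only: a purely syntactic transformation of an already-constructed token.
# Steps 1-3 (calibrating, aligning, reading off) have no counterpart here.

def to_centimeters(length_in_units, cm_per_unit=2.54):   # e.g. inches to cm
    return length_in_units * cm_per_unit

print(to_centimeters(3))   # 7.62
```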
What physics considers to be information does not seem very useful when studying human cognition. Note that I am not criticizing physics.
petrushka,
I mostly agree with that.
petrushka,
Neural networks aren’t what?
Neil:
That’s an argument for keeping Neil away from sharp objects, not an argument against the idea that brains process information.
KN,
I think it’s time we broke the news to you.
For the rest of us, commenting at TSZ feels like one continuous orgasm. We’ve been hiding that from you because we don’t want you to feel left out or deprived.
Sorry about that.
I’m not sure exactly what it is that you are agreeing with, but perhaps you agree with the scalability problem.
When I argued against “information processing” I intended to argue that sequential logic does not emulate what brains do. I’ve heard people say that any analog process can be emulated digitally: film replaced by digital images, records replaced by digital audio.
I’m not convinced this can be scaled to the behavior of brains. We are obviously trying, but I think a hybrid architecture would be more likely to succeed.
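The claim that any analog process can be emulated digitally is usually cashed out as sampling: measure the signal often enough, and finely enough, and you can reconstruct it to any desired accuracy. A minimal sketch of that idea, with a sine wave standing in for the analog process:

```python
import math

# Digital "emulation" of an analog signal: sample it at discrete times and
# quantize each sample to a finite number of levels. Whether this strategy
# scales to whole brains is exactly what is in dispute above.

def sample_and_quantize(signal, duration=1.0, rate=1000, levels=65536):
    samples = []
    for i in range(int(duration * rate)):
        t = i / rate
        q = round(signal(t) * (levels // 2)) / (levels // 2)   # 16-bit quantization
        samples.append(q)
    return samples

analog = lambda t: math.sin(2 * math.pi * 5 * t)   # a 5 Hz "analog" process
digital = sample_and_quantize(analog)
print(len(digital), digital[:3])
```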
In one of the Rama novels, Arthur Clarke discusses beings capable of digitally emulating humans. He even discusses the number of digits of precision required, and the fact that a digital parallel entity would drift away from the original. But the digital copy would not notice or care. Clarke explored this concept in several novels.
He was clearly of the opinion that if a system behaves like a conscious being at a fine enough level of detail, it is conscious. No zombies.
I would say that mastering the imitation game would change the nature of the hard problem. The spirits that inhabit trees and rocks and clouds become processes that can be understood.
That would explain why no one here wants to stop.
That’s a deeply held religious belief, part of AItheism.
It is, of course, true that information processes can be emulated. There seems to be a new dualism afoot. Or perhaps it should be described as a new idealism:
I’m disagreeing with that view of information. As far as I’m concerned, information is a human artifact. I’ll grant that other organisms can reasonably be said to also use information, so I suppose I should say that it is an organismal artifact.
For me it’s more like a toothache. Unpleasant, but I can’t stop pushing the tooth with my tongue anyhow.
I would say that what we call matter is a process that we do not, and probably never will, understand completely. We can, if we choose, build comfortable verandas, sip our juleps, and contemplate sunrises and sunsets.
And talk till we are blue about the ineffable. It’s hard questions all the way down.
Segall is not saying that he believes atoms and such to be alive just because the organisms we see around us are alive. He goes into a lot of detail about the ways in which they can be considered to be alive. He provides plenty of videos and articles where he explains his point of view.
In a video in which he comments about a video by Professor Corey Anton, he had this to say:
In the previous video he refers to a book, The Reflexive Universe by Arthur Young, where much the same views are expressed.
In a comment about another video by Professor Anton he says:
The materialists who treat the universe as a mechanism are the ones who are ignoring the obvious.
That’s all irrelevant. The argument essentially relies on the fallacy of division, whether he thinks atoms are alive or not.
Notice that you begin both of these comments with, “I think”. And this is where you must start if you wish to form any theory of knowledge.
Who are “materialists,” and why are they ignoring the obvious?
Because Segall writes walls of texts that never get past the fallacy of division?
Glen Davidson
Sure, but the “I think” there is purely formal — or, if you wish, grammatical. It’s not a commitment to any substantive theory about the nature of the self. Kant makes this point with painstaking clarity (and mind-numbing prose) in the Paralogisms section of the Critique of Pure Reason.
CharlieM,
Nowhere in that mass of verbiage do I find anything responsive to my simple request:
I’m not asking for a dissertation on Segall’s philosophy. I want to see his demonstration, if one exists, that physicalism (or “scientific materialism”, in his words) precludes life and consciousness. How, specifically, do the assumptions of physicalism lead to the conclusion that life and consciousness are impossible?
Is there such a demonstration, or is this another case of “Me like. Me believe!”?
keiths,
Howdy,
Found my way here to The Skeptical Zone along a rainbow hyperlink highway that appeared in my blog analytics earlier today. I like what you’ve done with the place. I am not sure what the context was for asking for my “demonstration that physicalism precludes life and consciousness.” Perhaps part of the context is a video of a lecture, “Evolutionary Panpsychism v. Eliminative Materialism?,” that I shared on Twitter in response to Philip Goff’s Aeon article on panpsychism and fine-tuning? https://youtu.be/Fyyu3BW7LCM
I don’t know that I can demonstrate this thesis for you. But I can try to perform its meaning and relevance for you, which could possibly elicit some new way of thinking about the problem of life or the mystery of consciousness that you haven’t considered before.
Physicalism is the idea that the universe is fundamentally composed of entirely blind, deaf, dumb–DEAD–particles in purposeless motion through empty space. For some reason, these dumb particles follow the orders of a system of eternal mathematical laws that, for some reason, the human mind, itself made of nothing more than dumb particles, is capable of comprehending. If you accept this definition of physicalism and the project of natural science, and if you avoid the question of the transcendental conditions for physics, then a coherent non-dualistic physicalist ontology requires that what we call “life” and “consciousness” both be explained away as mere appearances reducible to the mechanical collisions of particles. On this definition of physicalism, “life” and “consciousness” are just words we have for epiphenomenal illusions with no causal influence on what happens. “Life” is a genetic algorithm and “consciousness” is a meme machine, in Dawkins’ and Dennett’s terms. We are undead zombies, not living persons, on this reading of physicalism.
On the other hand, if you see consciousness and life as realities that are impossible to deny and that are in need of explanation *on their own terms*, either as emergent holistic processes with downward causative influence or as intrinsic capacities of phusis itself (my view), then clearly modern physicalism (or what Whitehead calls “scientific materialism”) must be mistaken. A panpsychist physics would be more adequate in the face of the realities of consciousness and life. If consciousness and life are not mere illusions with no hand in what happens but active participants shaping the evolutionary journey of the universe, then “physical stuff” like molecules and atoms, stars and galaxies, is not at all what the modern mind has been imagining for several centuries. Matter is not a heap of extensional lumps floating in homogeneous reversible time. That idea of matter has always been an idealistic abstraction. Concrete actually existing matter is infinite energy caught in a creative process of spatiotemporal evolution. This energetic expression is experiential through and through, and our special human form of conscious experience is just one of the universe’s many forms of spatiotemporal aesthesis.
keiths,
I told you. Assuming what Charlie gave us somewhere above is correct, the argument is a reductio that includes the fallacy of division. That’s it.
I was not talking about materialists in general. If you read what I wrote, I said the materialists who treat the universe as a mechanism.
He did not write any of the text in the post you are replying to. You will see that he is speaking in the links I provided. It was me who copied what he said into readable text.
And you too jump on the fallacy of division bandwagon 🙂
I did not say that thinking is a commitment to any theory. I was getting at the fact that thinking has to be your starting point before you are able to make any meaningful commitment whatsoever.