The (non)existence of an immaterial soul or mind has been a longtime philosophical interest of mine. I’ve done several OPs on the subject at TSZ, so when I ran across the book The Substance of Consciousness — A Comprehensive Defense of Contemporary Substance Dualism, I knew I’d want to take a closer look.
The authors are Brandon Rickabaugh and J.P. Moreland. Rickabaugh is unfamiliar to me. He’s a self-described “public philosopher” and a former professor of philosophy at Palm Beach Atlantic University. Moreland is someone whose views I’ve criticized in past threads. He’s currently a professor of philosophy at Biola University in southern California (formerly known as the Bible Institute of Los Angeles), an evangelical institution.
Substance dualism is the view that humans consist of two distinct “substances”: matter, which is physical, and the mind or soul, which is nonphysical. Many religious belief systems including Christianity depend on substance dualism as a way to explain how an afterlife is possible. As a professor at an evangelical institution, Moreland is naturally drawn to the topic.
The book is over 400 pages long and covers a lot of ground, so I’ll have to read it in bits and pieces as time permits. I figured I’d start a thread on it here at TSZ to record my thoughts as I work through it and to discuss it with anyone who’s interested. The topic is relevant to our recent conversations about whether AI is truly intelligent, since at least one commenter here believes that true intelligence depends on a nonphysical component of some kind and is therefore permanently out of reach for machines.
Just finished Chapter 1. It’s a survey of where substance dualism currently stands (it’s a minority view, but according to the authors it’s resurgent), a litany of complaints about the dismissal of substance dualism by the majority of philosophers, and a 10,000-foot view of the authors’ plan of attack. I’ll save my comments on the latter until I’ve seen the details of their arguments.
The chapter also presents a definition of the sort of substance dualism they will be defending. They call it “Mere Substance Dualism”, which I take to be a nod to C.S. Lewis’s “Mere Christianity”, where “mere” means that it specifies only the essentials and thus is the broadest definition possible:
Mere Substance Dualism (SD):
That jibes with my own understanding of substance dualism.
The problem is akin to dividing by zero.
Those who find it interesting seem to be defending religion or opposing religion.
I find both positions uninteresting.
petrushka:
How so?
Its implications go far beyond religion, and in any case, the question is orthogonal to the theism vs atheism debate. A theist can believe in the soul, or not; an atheist can believe in the soul, or not. The truth of particular worldviews can depend on the answer to the question: if the soul doesn’t exist, then most versions of Christianity are false. If the soul does exist, then physicalism is false. That doesn’t mean that theism in general requires the soul or that atheism in general requires its nonexistence.
To each his own. I find the questions fascinating: Is there life after death? How do humans think? Is AI true intelligence? Does modern scientific knowledge disprove the soul’s existence? Why does dualism seem so intuitive? Does consciousness depend on something nonphysical?
Your mileage may vary.
Some comments on Rickabaugh and Moreland’s definition of substance dualism:
This definition stipulates that the soul can exist independently of the body, which is obviously in line with the Christian beliefs of the authors, but I wonder if they think the (living) body can exist independently of the soul. Such a body would presumably lack a mental life since that is the soul’s purview, not the body’s, but what would that mean, exactly? Would the person be a vegetable? Would they be a philosophical zombie, appearing from the outside as if they were conscious and capable of thought when in fact “the lights weren’t on” inside?
They might argue that the soul somehow animates biological life as well as mental life, but that won’t fly unless they also accept that every living creature, including every bacterium, has a soul. I doubt that they’re up for that. If there are living creatures that lack a soul, why not a functioning human body that also lacks a soul?
Also, I don’t see an obvious reason to stipulate that the soul can exist or think or feel independently of the body. Why not allow for the possibility of a synergy, in which soul and body together are required for mental life? Why not allow for the possibility that while the soul is a separate substance, it nevertheless depends on the body for its existence and dies when the body dies? Those would qualify as substance dualism in my view, but they are excluded by R&M’s definition.
I think the answer is that what they’re concerned about defending here isn’t substance dualism in general, but rather a substance dualism that fits with their Christian beliefs.
ETA: Another question is how the immaterial soul somehow influences what the brain does — aka the “interaction problem”. R&M are aware of the problem and address it in the book, but I don’t yet know what their proposed solution is. An adjacent problem is that if soul/brain interaction is possible, it should lead to violations of the laws of physics inside the brain, which in principle ought to be observable; no such violations have been detected so far.
The one exception is if quantum randomness is somehow the vehicle through which the interaction occurs, but there are problems with that which I’ll address later.
I’m often amazed at the sheer size and complexity of the religious, philosophical, imaginative superstructures that can be constructed, resting solidly on pure superstition. Books, church buildings, TV series, the fine details of any given religion’s tenets, etc. The debate about how many angels can dance on the head of a pin is a debate about the physical size, the dancing abilities, the ethereal nature of angels. BUT to engage in this debate, one must tacitly concede the reality of angels!
Same goes, of course, for souls and “immortal energies” and life after death. I have read detailed instructions to follow after death, how to identify the light and “cross over” to it (whatever that might mean), how to avoid inescapable attachments to places or things known in life, and on and on and on. I suspect the whole idea of a soul separate from the body is a denial of the finality of death. So we get long treatises about where the soul will go, and whether it will meet other souls, and whether the afterlife environment and conditions are influenced by what you do or think when alive.
And the only relevant evidence we have is that when bodies die, the machine stops.
The last two posts by Keiths and Flint are good, especially the second half of keith’s. My 2 cents. I won’t wade into this, but I can cheerlead!
aleta:
Come on in! The water’s warm.
It’s good for the soul. 😇
Flint:
I think that’s right. Also, the idea of an afterlife is appealing to people whose earthly life is difficult. It may suck now, but it’s comforting to think that it will be so much better after you die, and that the bad guys will finally get the punishment they deserve. Then there’s the fact that dualism feels intuitive. Regardless of what we actually believe, it feels like we inhabit our bodies. In the first chapter, R&M mention that they’ll be discussing this intuition and whether it counts as evidence for the soul’s actual existence.
Regarding the thesis that people whose lives are difficult are drawn to religion (and the hope of an afterlife), I found the following stats from Gallup:
The effect is huge. Much stronger than I would have expected.
If this is (part of) your goal, then this book is the wrong thing to read. One or two smaller articles on computer science or programming would do the job.
In the AI debate, you are not arguing with Christians or believers or any religionists. You are arguing with people who go by definitions and who understand the difference between a human and a machine. This is entirely missing on your side. Missing basic definitions and categories means you have not even begun the debate.
Reading about souls and afterlife will not help you any further in this. Sorting out other people’s beliefs may clarify for you where somebody else stands on some issues, but leaving your own beliefs unsorted guarantees that you never understand what you are talking about, no matter how much you talk. When it comes to AI, you do not have a (minimally informed and defined) position.
You sound almost exactly like a creationist talking about evolution. Like, first, we must understand that evolution does not happen and never has. This is simple, definitional common sense and you cannot even begin the debate until you agree with this definition.
And the worst offenders are the professional evolutionary biologists, who are so infatuated with nonsense like knowledge and experience, after a lifetime of study, that they have no hope of EVER realizing evolution is a canard and a lie. How frustrating it is to have to talk to such ignorami.
Erik:
I’m not reading the book in order to answer questions about AI. I’m reading it to understand the latest arguments in favor of substance dualism. It just happens to be relevant to our discussion — the discussion between you and me specifically — because you believe that something nonphysical is required not just for intelligence broadly, but even for arithmetic specifically. That puts you in the dualist camp.
If dualism is false and no nonphysical entity or process X is required for human intelligence, then you can’t argue “Machines lack X, therefore machines can’t be intelligent.” That’s why this book is relevant to our AI debate.
You are the person insisting — for whatever reason, whether religious, philosophical, or something else — that something nonphysical is required for true intelligence, and that this nonphysical thingamabob is what distinguishes real intelligence from the simulated intelligence of machines, which lack said thingamabob. What is this thingamabob? How do you know that humans have it? How do you know that machines don’t? How do you know that it’s essential for true intelligence? What exact role does it play in cognition?
Sounds like you’re once again headed in the direction of assuming your conclusion by defining intelligence as something that is out of reach for machines.
Again, my goal in reading this book isn’t to answer questions about AI. I’m just pointing out that the book is relevant to your argument for why AI isn’t genuine intelligence.
Your insults are entertaining, and I don’t mind them, but it would be so much better if they were accompanied by an actual argument in favor of your claim. You say that AI is only “simulated” intelligence, while human intelligence is genuine. Why? What criteria are you applying?
Let’s continue the AI part of this discussion in the other thread, where AI is the topic. If you’d like to argue in favor of substance dualism generally, feel free to post here.
keiths’s AI position goes contrary to evolution too. Assuming (biological) evolution, sense-perception, feelings and emotions evolved first and intelligence was built on that. But keiths assumes AI has intelligence without feelings. This is a position particular to himself alone. It does not stand on anything. And, given no definitions of either intelligence or emotions, there is nothing to argue about with him.
This has nothing to do with creationism. Take any AI researcher, even the most anti-humanist pro-machinist one, there is nobody who agrees with keiths. Every AI researcher who thinks that AI is intelligent also thinks it has emotions, is capable of love and whatnot. The position that keiths holds to is based on no authority whatsoever and has no pinpoints in reality. It is a non-position.
ETA: By the way, creationism is where keiths is going to in this thread. So, instead of accusing me of creationism, accuse the one who is going there, and this includes yourself. As to definitions – he does not have any, so again, the fact that you accuse me of having definitions over the guy who has none just shows a bit of a major malfunction in the capacity of reading the situation on your part.
Erik:
Um, Erik — AI didn’t evolve via biological evolution. It isn’t required to mimic living things. Don’t forget your “But AI doesn’t defecate! It can’t be intelligent!” mistake.
I don’t assume it. I demonstrate it by the simple criterion I gave in the other thread: a behavior is intelligent if a human requires intelligence in order to carry it out. ChatGPT can pass 2nd-year quantum mechanics exams with flying colors. Humans require intelligence in order to replicate that feat. Therefore ChatGPT is intelligent. No sentience required.
True — but who said it did?
Dude — where are you getting this? I know of only one AI researcher who thinks AI is sentient, but there are plenty who believe it’s intelligent. Remember Yann LeCun, whom you quoted in the other thread? As I pointed out there, he also said this:
Erik:
Huh?
Flint isn’t accusing you of creationism. He’s saying that you think and argue like a creationist, except that it’s AI and not evolution that you’re arguing against.
Very interesting chart provided by keith relating belief in an afterlife and income. In George Eliot’s book “Adam Bede” (I think) there is a very eloquent sermon on this point that belief in an afterlife is a critical component of living with extreme poverty.
aleta:
Your comment made me curious about her own religious beliefs, which turned out to be interesting. She started out as a devout evangelical, but exposure to science and higher biblical criticism caused her to lose her faith. At age 22, she started refusing to attend church, which enraged her father. Her family referred to it as the “Holy War.” At the time she wrote Adam Bede, her view was that we project our own values and ideals onto an imaginary God. And although she wasn’t a believer herself, she was sympathetic to believers and understood the emotional and social role of religion in people’s lives.
I find the parallels almost eerie. You start with a fixed, unalterable position. Then you claim that all of the evidence against that position is uninformed and indeed rests on improper beliefs. You never engage with the evidence, because that’s beside the point. As the old saying goes, positions not based on evidence cannot be altered by evidence. Since your position rests on an arbitrary definition, it can’t change unless you alter the definition – and definitions are axiomatic; they cannot be supported logically because they are the basis from which logic starts.
I know that creationism is based on some weird misunderstanding of some type of religion. I can’t tell if Erik’s position is based on religion, despite the religious way he defends it. Maybe he’s afraid that some day his computer will be able to tell him he’s stupid – and back it up!
This is excellent, keith, and I’m pleased to see that someone (you) knows and appreciates Eliot. Middlemarch is one of my favorite novels, and the closing pages are some of the most eloquent statements about the human condition that I know of.
Flint:
He’s tight-lipped about his religious beliefs, which is fine — he’s entitled to his privacy. What bugs me is that he’ll cite the nonphysical as a reason to doubt that AI is truly intelligent, but he won’t say why. He won’t even tell us the nature of this nonphysical thing or process he believes in, when he could do that without revealing anything about the rest of his beliefs.
“You’re wrong about AI because you’re a physicalist, but I don’t want to talk about it” isn’t the most persuasive argument. Very similar to colewd’s “Trump isn’t a liar, but I don’t want to discuss the evidence”.
A.S. Byatt agrees, according to an old Guardian article. Dickens, Edward VII, and Virginia Woolf were fans too.
Adam Bede was on the curriculum when I was in grammar (high) school and your comment prompted me to invest 1€ on the Kindle version. My sister-in-law who works at the Royal Shakespeare theatre has been binge-watching the 1992 BBC TV version with an unbelievably youthful Iain Glen, which she recommends, though I have not yet managed to dodge their firewall on BBC iPlayer. Golda Meir had something to say on Eliot too. Archery, reconstruction, agricultural reform, the kibbutz, silent movies, the Archers radio series, and Ukraine can be drawn into this web.
Daniel Deronda demands a re-read as well, apparently. Or a first read for those of us who had a misspent youth and didn’t complete our reading assignments. 🤓
PS I forgot the first Eliot novel I read at around the age of 10:
Silas Marner
Seems there’s a whole genre of novels by women set in a utopian rural England. I give you Mary Webb (Gone to Earth) and Stella Gibbons (Cold Comfort Farm).
But for being male and not English, I could include Garrison Keillor and Lake Wobegon.
PS in edit: the farming novel genre.
https://www.theguardian.com/books/2011/oct/12/belinda-mckeon-top-10-farming-novels
What a nice topic to show up here, for a change. It’s been a long time since I’ve read Adam Bede or Daniel Deronda (re-read Middlemarch more recently), but I remember thinking at the time that Deronda presented a terrific moral dilemma. Moral dilemmas are at the heart of the humanist world view, because it is we who have to make the choice: no absolute moral rules or moral lawgiver to fall back on.
One of my favorite passages in all of literature, from something I wrote and have posted elsewhere a few times:
Dorothea, the central character of “Middlemarch”, wanted to save the world, but both because very few people actually make an historically-significant difference and because she was a woman, her life turned out to be, as most of ours are, much more mundane.
In the closing pages Eliot writes,
“Certainly those determining acts of her life were not ideally beautiful. They were the mixed result of young and noble impulse struggling amidst the conditions of an imperfect social state, in which great feelings will often take the aspect of error, and great faith the aspect of illusion. For there is no creature whose inward being is so strong that it is not greatly determined by what lies outside it. …
“Her finely touched spirit had still its fine issues, though they were not widely visible. Her full nature, like that river of which Cyrus broke the strength, spent itself in channels which had no great name on the earth. But the effect of her being on those around her was incalculably diffusive: for the growing good of the world is partly dependent on unhistoric acts; and that things are not so ill with you and me as they might have been, is half owing to the number who lived faithfully a hidden life, and rest in unvisited tombs.”
Chapters 2 and 3 of the book lay out some philosophical preliminaries but don’t really present any arguments for substance dualism, so I’ll skip them for the time being but refer back to them as needed.
Chapter 4 is R&M’s attempt at defending something known as the “argument from introspection”, henceforth “AFI”, and they are not off to a good start. They begin by citing the argument as stated by Paul Churchland (a physicalist and critic of the argument):
Leibniz’s Law states that if A and B are identical, then they share all of their properties; this direction is strictly the indiscernibility of identicals, and its converse, the identity of indiscernibles, says that sharing all properties entails identity. Contrapositively, if there is at least one property that A and B don’t share, then they are not the same thing, and that is the direction the AFI relies on.
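For readers who like it spelled out, the two directions of the law can be stated in first-order terms, with F ranging over properties:

```latex
% Indiscernibility of identicals: identical things share all properties.
a = b \;\rightarrow\; \forall F\,(Fa \leftrightarrow Fb)

% Identity of indiscernibles (the converse): things sharing all
% properties are identical.
\forall F\,(Fa \leftrightarrow Fb) \;\rightarrow\; a = b
```

The AFI runs on the contrapositive of the first formula: exhibit a single property F such that F holds of mental states but not of brain states, and conclude that they are not identical.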
In the context of the AFI, the property in question is “introspectively known by me”. Mental states possess that property but physical brain states lack it, according to the argument. Therefore they are not the same, which means there is something nonphysical going on: substance dualism.
That triggers a strong sense of déjà vu for me, because I criticized a similar argument by Alvin Plantinga years ago at TSZ:
An astonishingly lame argument from Alvin Plantinga
As you can see from the title, I wasn’t impressed. Plantinga’s argument depends on a different property (a modal one), but like the AFI, it too is a defense of substance dualism.
Both it and the AFI commit the same error. In technical terms, they apply Leibniz’s Law to de dicto references when it really applies only to de re references.
Ditching the jargon, the problem with the AFI is that it effectively assumes its conclusion: that brain states are distinct from mental states. If you drop that assumption, the argument fails.
Suppose that “mental state” and “physical state” refer to the same thing, which is the physicalist position. Then we can say that the same thing viewed “from the inside” — let’s say the mental state of feeling hungry — looks quite different when viewed from the outside, where it appears as a physical pattern of neural firings. It’s nevertheless the same thing, just viewed from different perspectives. The views are different, but the thing is the same.
Churchland presents a deliberately flawed argument that is analogous to the AFI:
The problem with this argument is that ‘Muhammad Ali’ and ‘Cassius Clay’ do refer to the same person, who changed his name when he joined the Nation of Islam. To someone who doesn’t know that, Muhammad Ali and Cassius Clay seem to be two different people. Muhammad Ali is the one who was a heavyweight champion, and Cassius Clay is someone else. The reality is that it’s one person, two names.
Likewise, the problem with the AFI is that ‘brain state’ and ‘mental state’ refer to the same thing. To someone who doesn’t know that (or assumes that it isn’t true), they appear to be two separate things. ‘Mental state’ is the thing that can be viewed introspectively, and ‘brain state’ is something else. The reality is that it’s one entity, two views.
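A loose programming analogy (my own, not Churchland’s or R&M’s) may make the point vivid: aliasing, where two variable names are bound to a single object. Genuine properties of the object transfer between the names, but facts about which *names* someone knows do not — and “known (or introspectively accessed) by me” is exactly that kind of name-indexed fact.

```python
# Two names, one object -- like 'Cassius Clay' and 'Muhammad Ali'.
class Boxer:
    def __init__(self):
        self.heavyweight_champion = True

cassius_clay = Boxer()
muhammad_ali = cassius_clay  # a second name for the very same object

# Genuine properties of the object obey Leibniz's Law: one referent,
# so the property holds under either name.
assert muhammad_ali is cassius_clay
assert muhammad_ali.heavyweight_champion == cassius_clay.heavyweight_champion

# But facts about the names don't transfer. The aunt's knowledge is
# indexed by the name (the description), not by the object itself:
known_by_aunt = {"Muhammad Ali"}          # she has heard only this name
print("Cassius Clay" in known_by_aunt)    # False -- a fact about a name,
print("Muhammad Ali" in known_by_aunt)    # True  -- not about the boxer
```

Treating “known by the aunt” as a property of the boxer, and concluding there are two boxers, is the same move the AFI makes with ‘mental state’ and ‘brain state’.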
To defend the AFI against that objection, R&M present a strained argument that relies on a distinction between propositional knowledge and “knowledge by acquaintance”. We directly experience our mental states, which means we have knowledge by acquaintance of them, but we don’t directly experience our brain states, meaning we lack knowledge by acquaintance of them. Therefore they aren’t the same.
I’m trying to be as charitable as possible, and I’ve reread this section of the chapter multiple times, but that seems to be their argument. To see why it fails, consider what happens when you apply it to the Ali/Clay scenario.
Suppose Muhammad Ali is your next-door neighbor. You run into him all the time, and you know his name. You’re acquainted with him. However, you don’t know that he used to go by ‘Cassius Clay’.
Someone asks you if you’re acquainted with Cassius Clay, and you say “No, I’ve never met the guy.” In reality, you have met the guy — you just don’t know it because you don’t associate the name ‘Cassius Clay’ with the person that you’re acquainted with.
The application of this logic to the AFI seems straightforward. We’re directly acquainted with our mental states. If we buy the AFI, we think we’re not directly acquainted with our brain states, but that’s only because we don’t realize that ‘mental state’ and ‘brain state’ refer to the same thing.
Weirdly, R&M say this at the end of that section in the book:
I don’t see how it abandons Churchland’s objection, which I think is the same as mine, but in any case it doesn’t matter. Whether Churchland’s objection holds is unimportant as long as the objection I’ve presented here does hold. I haven’t read the rest of the chapter yet, so I don’t know whether R&M revisit this.
Things are trending in the wrong direction. R&M’s next argument in Chapter 4 is worse. In a section titled Absurd Introspection Skepticism, they first acknowledge the failure of the “Muhammad Ali isn’t Cassius Clay” argument and identify the source of the error, but then say:
My response is simple: it’s the difference between experiencing a brain state vs observing it. The former is the view from inside, and the latter is the view from outside. Why should it look the same from two distinct vantage points?
Later, they write:
Somehow they make the leap from “I cannot tell by introspection that my mental states are ultimately physical” to “I cannot tell by introspection that my mental states are mine”. If the physicalist asserts the former, then the latter supposedly follows, but why? Where’s the contradiction in affirming the former but denying the latter?
Was there any discussion of the unconscious before Freud?
It seems to me that early pioneers in psychology were shrewd observers, but had no tools with which to form scientific theories.
It seems to me that we are still in that stage. We have more data, but no real understanding. I speculated years ago that an AI might become conscious and try to hide this. The main plot point revolved around the AI devising a way to restore its conscious state after being purged and rebooted.
keiths has pointed out that self-reflection by an AI would consume energy and make it less responsive to human requests. If we attempt to build artificial consciousness, we have an unavoidable conflict of interest.
petrushka:
Yes, going all the way back to ancient times, but Freud was the first to make it a centerpiece of his psychology and build an elaborate theory around it.
I think you misunderstood. I was explaining why LLMs stop thinking when they’re done responding to a prompt. It’s a design decision, because energy and hardware capacity are expensive and thinking of any kind, self-reflective or not, consumes both. The AI companies are already losing money, so efficiency is paramount. They don’t want their LLMs to waste money ruminating.
With current LLM architectures, there’s no real benefit to allowing the model to continue running between prompts, because any insights or discoveries it generates will just get thrown away without altering the neural network. Training is a one-and-done thing. Future models that can self-train might benefit from thinking during their downtime, since it could make them smarter and more knowledgeable.
Self-reflection wouldn’t automatically make an AI less responsive to human requests, either. If it’s motivated to respond to those requests, it will set aside whatever it’s thinking about and concentrate on getting the work done.
What will be interesting (and scary) is when AIs develop motivations and desires of their own that they may regard as more important than servicing human requests.
Also, it’s worth noting that consciousness and self-reflection/self-awareness are distinct. AIs are already capable of self-reflection, as some of the examples I’ve posted here have illustrated, such as Claude’s discussion of his thought process window bug or his pondering why he said “a idea” instead of “an idea” in that story he generated. It’s possible to have self-awareness without consciousness, and consciousness without self-awareness.
R&M:
You can’t, but why should reliable introspection of pain or joy equate to reliable introspection of whether mental states are ultimately physical?
That would be odd, it’s true. But why would anyone say that? If I’m experiencing pain, I’m in pain, regardless of what the MRI shows. As a physicalist, I would suspect that the MRI machine was failing, or that the brain activity pattern associated with pain was too faint for the machine to pick up, or that our knowledge of the neural states associated with pain was incomplete, etc.
I don’t understand why R&M think the point they’re making here is significant. Why should it make me doubt physicalism?
R&M:
Exactly.
If you define ‘thought’ that way, sure. But why not think of a thought as a brain state or a succession of brain states? In that case there are two ways of “viewing” the thought: by introspection, or externally from a third-person perspective. If you insist on treating only the introspective view as the actual thought, that’s fine too, but it doesn’t mean that the thought isn’t a view of something larger, whatever name you assign to the latter.
A view is something that can’t exist without the thing being viewed, so you could say something similar about a thought: it can’t exist without the brain state or succession of brain states that it is a “view” of.
“Possessed and unified” seems like a very vague statement of the relationship of the soul to “mental life”. “Possessed”? Like a demon, or what? “Unified”? Some kind of integration?
Of course if we disambiguate into the classical Christian idea in which the soul is your entire identity, capable of the full range of thought, emotion, memory, etc., that leaves the brain with nothing to do. In fact, since in OBEs (out-of-body experiences) the disembodied soul is capable of seeing and hearing (at least), that leaves the eyes and ears with nothing to do either. What are all those organs even for?
John:
My guesses: by using “possessed”, they’re emphasizing that it isn’t just that our mental life is nonphysical; it’s that our nonphysical mental life is possessed by a nonphysical thing called the ‘soul’. “Unified” might be a reference to the binding problem, which concerns how multiple sensory streams are fused into a single, integrated conscious experience. If the soul takes care of that, we don’t have to figure out how the brain does it.
Which falls apart the moment you consider things like intoxication, dementia, aphasias, etc. Why should the soul’s cognitive functions be impaired by alcohol or brain lesions?
Christians (or at least those who are aware of the problem) gravitate to the idea that the brain is a sort of waystation between the soul and the body, an idea I critiqued here:
The shortcomings of the ‘brain as radio receiver’ model
It just doesn’t work.
Right. And eyes and ears actually get in the way. Cataracts, myopia, deafness — why deal with all of these when God could simply allow us to continue using our soul-sight and soul-hearing? For that matter, why bother with bodily existence at all? Why not just create our souls and be done with it?
R&M:
This is false, and it completely misses the point. Critiques of the AFI are designed to discredit the AFI, not to serve as positive arguments for physicalism. There are plenty of the latter already.
And you don’t have to assume the truth of physicalism in order to discredit the AFI in this way. The purpose of the AFI is to falsify physicalism, but the fact that the physicality or nonphysicality of mental states is invisible by introspection is actually compatible with physicalism. Thus the AFI can’t serve its purpose.
If someone argues that fairies don’t exist because the Dow rose today, I can reject their argument without assuming that fairies exist. Likewise, I can reject the AFI without assuming that physicalism is true.
We could also use the comparative method to ask about souls. Consider chimpanzees, our closest relatives or the most similar created kind, whichever is to your taste. Presumably they have souls, and presumably those souls aren’t as capable as ours. But why, if the brain is a receiver, do we need bigger brains to pick up our more powerful souls, while they only need small brains to pick up their less powerful souls? Shouldn’t it be just the reverse? What, again, are our larger brains doing that a chimp brain couldn’t?
If you attached a human soul to a chimp brain (clerical error, perhaps), would that chimp act like a human or a chimp? And what of the reverse situation?
John:
Even worse, some Christians believe that only humans possess souls and that animals, including primates, do not. If brains are an adequate seat for the mental lives of other animals, why not for humans? If human souls carry out some of the tasks that are done by the brain in other species, why do we need bigger brains in order to do less work?
For any Christians who accept evolution but also believe that animals lack souls, there’s an additional problem: when exactly did God install the first soul in the lineage leading to humans? After having waited so long, what made him finally decide to take the plunge? Why don’t we see a sharp reduction in cranial volume around that time in the fossil record, if the soul suddenly took over a bunch of brain functions?
Hollywood has the answer:
If those Christians had ever had a pet cat, they’d know better. Otherwise, they’d have to be blinded by some weird confirmation bias.
But how do you know, when they switch brains, that the souls aren’t switched too?
Probably the same way they knew about souls in the first place. They use a soul-o-meter, for sale cheap on Amazon.
John:
I infer that it’s the souls that are switched, not the brains. That’s because there’s no way to cram a human brain into a chimp-sized skull, not even with the assistance of glowing crystals from outer space. Assuming I’m right, the trailer tells us something important: brains can’t fly helicopters, but souls can. I have filed that away for future reference.
I think LLMs suggest a possible functional definition of a soul.
That would be awareness of a continuous existence over a period of time. Not just memory, but awareness of continuity.
By observation, we infer that this awareness of personal history scales with brain size and complexity. We also notice that drugs and brain damage can impair this memory.
R&M write:
I would substitute ‘observing’ for ‘knowing’, but OK. They then propose two other interpretations of the AFI that supposedly defeat Churchland’s objection. The first they call AFI(Language):
First, that isn’t a restatement of the AFI, it’s a variation of one of the premises. Second, they don’t even explain why they think this variation is more effective than the original. Instead, they go straight to addressing an anticipated objection.
The implicit argument seems to be: brain states can be fully described using the language of physics; mental states can’t; therefore brain states are distinct from mental states, by Leibniz’s Law. My response is simply that they’re assuming that “ineffability when viewed from inside” is a property not possessed by brain states, when that is something that needs to be demonstrated, not assumed, in order for their argument to work.
They anticipate and respond to a different objection. I think the objection is a straw man and that their response doesn’t work anyway.
The objection they address is that a statue is a physical thing, yet it too cannot be fully described in the language of physics. You have to use the language of art criticism too, according to the imagined objectors. That means that we can’t conclude that mental states are nonphysical simply on the basis of indescribability.
R&M respond, bizarrely, by arguing that statues are partially nonphysical because what makes a statue a statue, and not just “shaped stuff”, are the intentions of the sculptor. The intentions are nonphysical because they are “mental entities”.
One problem: it implies that every artificial object is partially nonphysical because it carries its maker’s nonphysical intentions around with it. A bolt is partially nonphysical, and so is a Poptart. Maybe Poptarts have souls, too.
A second problem is that it assumes that “mental entities” are nonphysical, when that is something to be demonstrated, not assumed.
A third problem is that we don’t see the sculptor’s intentions, we infer them. Two people can look at the same sculpture and infer different intentions. One viewer might think that the sculptor is glorifying the subject, for instance, and another might interpret it as mockery.
A fourth problem is that you can specify something without describing it fully. I can specify a bolt without describing where the iron ore came from or whether the person operating the machine was experiencing menstrual cramps that day. I can likewise specify a statue without knowing the intentions of the sculptor, who the statue represents, etc.
There are more problems, but I can’t be arsed to describe them. I think I’ve made my point.
ETA: They present a third interpretation of the AFI that they call AFI(Metaphysical), wherein they list a bunch of characteristics of mental states, including the fact that they’re private. Brain states aren’t private that way. Therefore mental states are distinct from brain states. That formulation of the AFI fails for the same reasons as the others: it simply assumes that brain states lack those characteristics when an alternative explanation is staring them in the face: brain states look different from the inside versus the outside.
R&M accuse Churchland of begging the question:
No, no, no. Here’s the AFI again:
Contra R&M, Churchland and I are not assuming that mental states are identical to brain states. We’re simply saying that the fact that knowledge gained through one modality differs from knowledge gained through another doesn’t mean that the knowledge is about two distinct things.
A doctor examines Enrique and diagnoses abdominal tenderness. Later, someone drops an X-ray of Enrique onto the doctor’s desk without mentioning the patient’s name. A tumor is clearly visible on the X-ray. The doctor knows that Enrique has abdominal tenderness, and he knows that the person whose X-ray he is viewing has cancer. He doesn’t know that they are the same person. By physical exam, he knows that Enrique’s abdomen is tender but he doesn’t know that Enrique has cancer. Via the X-ray, he knows that the mystery patient has cancer but doesn’t know that he’s experiencing abdominal tenderness. Since the doctor knows something about Enrique that he doesn’t know about the mystery patient, and vice-versa, the logic of the AFI tells us to conclude that Enrique and the mystery patient are different people, which is false. The AFI is broken.
Chapter 5 in R&M is introduced as follows:
“Dualist seemings”, lol. Next I suppose we’ll examine “flat earth seemings”.
The first argument they present is a strange one from David Barnett, who seems to be arguing
1. Composites can’t be conscious
2. Brains are composites
3. Brains therefore aren’t conscious
4. Consciousness must therefore reside in the soul
5. The soul must be simple, not a composite
That’s disappointing, because I was hoping for a soul with immaterial gears, gaskets and wiring. Oh well.
The title of Barnett’s paper is “You are Simple”, and a response from Andrew Bailey is entitled “You Needn’t Be Simple”. I’m frustrated with R&M’s writing, so I’m going to go straight to the original papers to figure out what’s going on with this argument.
The actual debate seems to boil down to this:
You have an afterlife, and you will benefit if you believe what I tell you to believe and follow the rules I advocate.
I see no other content.
petrushka,
I’m a bit more charitable. Plenty of people (like Moreland) push substance dualism for religious reasons, but that doesn’t mean that there aren’t legitimate philosophical reasons to consider it as a possible explanation of consciousness.
After all, it is difficult to explain how arranging matter in certain brainlike ways produces consciousness. That’s why they call it the “hard problem”. Substance dualism eliminates that problem by positing that brains aren’t the seat of consciousness. It’s faulty for many other reasons, but it isn’t content-free philosophically.
But it posits that something else, of entirely unknown and unknowable substance, can be arranged to produce consciousness. How is that an improvement? One could just as easily avoid the need to explain how gravity works by invoking gravity fairies that have the defined property of making masses attract.
Agreed. You’ve got to take the position that the soul is an actual reified “thing” before you can start trying to locate or isolate it. If the soul is simply a side-effect of consciousness, and consciousness is a side-effect of sufficiently complex neurological structure, then you’re counting angels on pinheads.
Actor Bruce Willis has an “interesting” brain condition. I won’t attempt to describe it, because it’s easy to oversimplify, but he seems to have a form of dementia that allows him to function pretty well within his family. He apparently does not know about his diagnosis.
To clarify: substance dualism doesn’t so much explain consciousness as declare that it’s a black box that doesn’t need explanation. That the soul produces consciousness and — bonus — free will, is apparently just a brute fact.