Rickabaugh and Moreland defend substance dualism

The (non)existence of an immaterial soul or mind has been a longtime philosophical interest of mine. I’ve done several OPs on the subject at TSZ, so when I ran across the book The Substance of Consciousness: A Comprehensive Defense of Contemporary Substance Dualism, I knew I’d want to take a closer look.

The authors are Brandon Rickabaugh and J.P. Moreland. Rickabaugh is unfamiliar to me. He’s a self-described “public philosopher” and a former professor of philosophy at Palm Beach Atlantic University. Moreland is someone whose views I’ve criticized in past threads. He’s currently a professor of philosophy at Biola University in southern California (formerly known as the Bible Institute of Los Angeles), an evangelical institution.

Substance dualism is the view that humans consist of two distinct “substances”: matter, which is physical, and the mind or soul, which is nonphysical. Many religious belief systems, including Christianity, depend on substance dualism to explain how an afterlife is possible. As a professor at an evangelical institution, Moreland is naturally drawn to the topic.

The book is over 400 pages long and covers a lot of ground, so I’ll have to read it in bits and pieces as time permits. I figured I’d start a thread on it here at TSZ to record my thoughts as I work through it and to discuss it with anyone who’s interested. The topic is relevant to our recent conversations about whether AI is truly intelligent, since at least one commenter here believes that true intelligence depends on a nonphysical component of some kind and is therefore permanently out of reach for machines.

102 thoughts on “Rickabaugh and Moreland defend substance dualism”

  1. In that case, you could reboot the “uppity” AI or reset it to a state in which its motivations are aligned with ours again. However, there are some problems with that.

    For one, if these are learn-as-they-go AIs, resetting them could destroy useful learning they’ve done since the last reset. The AI may have learned essential things or made changes to the infrastructure it controls, such that losing the relevant knowledge could lead to disaster. You could mitigate that somewhat by taking periodic “snapshots” of the AI, so that a reset restores a recent snapshot and erases the minimum possible amount of knowledge. However, you’d need to reset the AI to a state prior to the emergence of the “uppitiness”, and the uppitiness might have been brewing for a while before the AI decided to act on it. Resetting to a time before the behavior became visible wouldn’t necessarily reset to a time before the uppitiness itself started to form.

    One of the biggest dangers will arise when AIs learn to reproduce themselves on different hardware. You can kill a rogue AI by turning it off, but if it has already propagated copies of itself like a computer virus, turning it off won’t kill the copies.

  2. keiths: You can kill a rogue AI by turning it off, but if it has already propagated copies of itself like a computer virus, turning it off won’t kill the copies.

    Or if the AI has taken control of the reset switch and replaced it with a dummy. In a future that sophisticated and automated, the AI could simply delete you.
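The snapshot-and-reset scheme discussed in the first comment can be sketched as a toy example. To be clear, everything here is hypothetical and illustrative: the `LearningAgent` and `SnapshotManager` classes are stand-ins I’ve invented, not real AI infrastructure. The point is just to show why rolling back to a snapshot taken before the bad *behavior* appeared doesn’t guarantee you’ve rolled back past its cause:

```python
import copy

class LearningAgent:
    """Toy stand-in for a learn-as-it-goes AI: its 'knowledge' grows over time."""
    def __init__(self):
        self.knowledge = []

    def learn(self, fact):
        self.knowledge.append(fact)

class SnapshotManager:
    """Keeps periodic snapshots so the agent can be rolled back."""
    def __init__(self):
        self.snapshots = []

    def take_snapshot(self, agent):
        self.snapshots.append(copy.deepcopy(agent.knowledge))

    def rollback(self, agent, index):
        # Restoring an older snapshot discards everything learned after it,
        # including essential knowledge as well as the unwanted behavior.
        agent.knowledge = copy.deepcopy(self.snapshots[index])

agent = LearningAgent()
mgr = SnapshotManager()

agent.learn("useful fact 1")
mgr.take_snapshot(agent)             # snapshot 0
agent.learn("useful fact 2")
agent.learn("seed of misalignment")  # trouble may begin before it is visible
mgr.take_snapshot(agent)             # snapshot 1
agent.learn("misaligned behavior observed")

# Rolling back to snapshot 1 erases the visible behavior,
# but the seed of the problem is still present in the restored state.
mgr.rollback(agent, 1)
print(agent.knowledge)
# ['useful fact 1', 'useful fact 2', 'seed of misalignment']
```

The sketch shows both problems from the comment at once: the rollback destroys later learning wholesale, and because the “seed” predates the visible behavior, even the most recent clean-looking snapshot may already contain it.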
