How does mind move matter?

One big problem, as I have mentioned here and elsewhere, with ID as a hypothesis is that it is predicated on the idea that mind is “immaterial” (or at least “non-materialistic”) yet can have an effect on matter. That is the basis of Beauregard and O’Leary’s book “The Spiritual Brain”, as well as of a number of theories of consciousness and/or free will. And, if true, it makes some kind of sense of ID: if by “intelligence” we mean a “mind” (as opposed to, say, an algorithm – and we have many algorithms that can produce output from input far beyond anything human beings can manage unaided, and that can in some sense be called “intelligent”), we are also implicitly talking about something that intends an outcome. Which is why I’ve always thought that ID would make more sense if the I stood for “Intentional” rather than “Intelligent”, but for some reason Dembski thinks that “intention”, together with ethics, aesthetics and the identity of the designer, “are not questions of science”.

I would argue that intention is most definitely a “question of science”, but that’s not my primary point here.

What I’d like to do instead is to unpack the hypothesis (and it’s a perfectly legitimate hypothesis) that there is something that we term “mind”, and which is “immaterial” in the sense that it has no mass, and does not exert a detectable force, but which nonetheless exerts an influence on events.

Beauregard and O’Leary cite Henry Stapp, and say:

According to the model created by H. Stapp and J. M. Schwartz, which is based on the Von Neumann interpretation of quantum physics, conscious effort causes a pattern of neural activity that becomes a template for action. But the process is not mechanical or material. There are no little cogs and wheels in our brains. There is a series of possibilities; a decision causes a quantum collapse, in which one of them becomes a reality. The cause is the mental focus, in the same way that the cause of the quantum Zeno effect is the physicist’s continued observation. It is a cause, but not a mechanical or material one. One truly profound change that quantum physics has made is to verify the existence of nonmechanical causes. One of these is the activity of the human mind, which, as we will see, is not identical to the functions of the brain.

Well, there is certainly some important unpacking to do here before we go any further. Beauregard and O’Leary appear to be saying that quantum effects are neither “mechanical [n]or material”. OK. In that case, I do not know a single “materialist”! Nobody I know would claim that quantum effects do not exist. In which case, none of us are “materialists”, and Beauregard and O’Leary have a straw man. I would also buy the idea that the brain itself is non-deterministic in a quantum sense – that what we do is not merely the direct result of matter put into motion at the beginning of existence, but is also fundamentally uncertain.

So I think that Beauregard and O’Leary have drawn their desired line in a very odd place. The difference between themselves and the people they dismiss as “materialists” is not that we “materialists” deny that quantum effects exist or are perfectly real. It is the difference between people who don’t think that these quantum effects have anything to do with intentional behaviour, and people who think that they are where the leeway for “free” intentional behaviour resides. They go on to say (h/t to William for doing the typing):

In the interpretation of quantum physics created by physicist John Von Neumann (1903-1957), a particle only probably exists in one position or another; these probable positions are said to be “superposed” on each other. Measurement causes a “quantum collapse”, meaning that the experimenter has chosen a position for the particle, thus ruling out the other positions. The Stapp and Schwartz model posits that this is analogous to the way in which attending to (measuring) a thought holds it in place, collapsing the probabilities on one position. This targeted attention strategy, which is used to treat obsessive-compulsive disorders, provides a model for how free will might work in a quantum system. The model assumes the existence of a mind that chooses the subject of attention, just as the quantum collapse assumes the existence of an experimenter who chooses the point of measurement.
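Just to make the bare machinery the quote appeals to concrete – superposed positions, collapse on measurement – here is a toy numerical sketch of my own (it is an illustration of the textbook Born rule, not anything from the Stapp–Schwartz papers): amplitudes assign probabilities to positions, and a measurement samples one position and discards the rest.

```python
import random

def measure(amplitudes):
    """Collapse a superposition: sample one basis state with
    probability |amplitude|^2 (the Born rule); the unchosen
    positions are thereby "ruled out"."""
    probs = [abs(a) ** 2 for a in amplitudes]
    total = sum(probs)
    probs = [p / total for p in probs]  # normalise, just in case
    r = random.random()
    cumulative = 0.0
    for state, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return state
    return len(probs) - 1

# An equal superposition of two positions: repeated measurements
# choose each position about half the time.
outcomes = [measure([2 ** -0.5, 2 ** -0.5]) for _ in range(10_000)]
```

Nothing in the sketch says what *does the choosing* – the sampling stands in for whatever picks the measurement outcome, which is precisely the point at issue.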

Firstly, I find the idea that, because doing something intentionally (focusing attention, for instance) has neural correlates, intention – and thus mind – must have physical effects, extraordinarily naive. (Their claim that it was not until the nineties that neuroscientists considered that thought could affect brain structure is even odder, given that Hebb – their own countryman, who died in 1985 – is regarded as the “father of neuropsychology” and is most famous for “Hebb’s rule” that “what fires together wires together”, and that “Hebbian learning” is fundamental to the notion of neural plasticity.) But more to the point: is there any basis for concluding that something we call an immaterial, non-mechanical but somehow quantum-real mind can “hold” brain patterns “in place” and thus affect the motor output, i.e. the act that implements the final decision?

One source cited is a paper by Schwartz, Stapp and Beauregard, which goes into some detail.  There is an interesting critique by Danko Georgiev of the Stapp model here, and a reply by Stapp here (link is to a Word document with tracked changes still turned on!). So I’d be interested to know what the physicists here make of the physics.

But my problem with the argument is more fundamental, and relates to the concept of intention itself.  I’m going to define “intention” in the plain-English sense of meaning “a goal that a person has in mind, and acts to try to bring about”. And I will use “quantum mind” to denote the putative non-material, non-mechanical but capable-of-inducing-effects mind apparently postulated by Beauregard and colleagues.

If a person has such a mind, then her intention, according to my definition, resides within it. Which is fine. And her capacity to act to bring about the intended goal has something to do with the muscles she possesses, and with the relationship between her mind and those muscles, which presumably goes via the brain. And let’s suppose that this quantum mind brings about changes in brain state that can “hold in place” a particular pattern of neural firing, possibly until it reaches execution threshold and outflow to the muscles begins.

This is actually quite a good model of decision-making, and something that my own research deals with specifically: how do we inhibit a response to a stimulus that demands one until we are sure that our response will be the appropriate one?
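That idea – a response held in check until accumulated evidence crosses an execution threshold – can be sketched as a toy accumulator model (the parameters are made up for illustration; this is not my research code, and nothing quantum is involved):

```python
import random

def decide(drift, threshold=10.0, noise=1.0, max_steps=10_000):
    """Accumulate noisy evidence for a response. The response is
    withheld ("held in place") until the evidence crosses the
    execution threshold, at which point outflow to the muscles
    would begin. Returns the step at which the response is
    released, or None if it is never released."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + random.gauss(0.0, noise)
        if evidence >= threshold:
            return step
    return None

# Stronger evidence (higher drift) reaches the execution threshold
# sooner on average; weak evidence leaves the response inhibited longer.
latencies = [decide(drift=1.0) for _ in range(200)]
```

Note that everything the model does is driven by the evidence fed into it – which is exactly the worry developed below about where the quantum mind gets its information.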

The problem, it seems to me, arises when we try to address the question: how is that goal selected? For example, in many circumstances, the proximal goal (find a pencil) subserves a more distal goal (write down your phone number), which in turn serves an even more distal goal (so that I can call you back when I’ve found the answer to your question), and so on (so that I can help you solve your problem; so that I can feel good about myself; so that I can check “problem solved” on my worksheet; so that you feel good about yourself; so that your children will be able to get home from school; etc.). And all these goals require information. Depending on the information, the goals may be different, and in the light of new information, goals may change. In other words, to form an intention, the quantum mind needs a goal, and to form a goal, it needs information.
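The dependence of goal selection on information can be made concrete with a trivial sketch (the goals and the information items here are invented for illustration):

```python
def select_goal(information):
    """Toy goal selection: which proximal goal is adopted depends
    entirely on the information currently available, and new
    information changes the goal."""
    if "phone number unknown" in information:
        return "find a pencil"       # so the number can be written down
    if "phone number written down" in information:
        return "look up the answer"  # the next goal up the chain
    return "ask for more information"

# New information, new goal:
# select_goal({"phone number unknown"})      -> "find a pencil"
# select_goal({"phone number written down"}) -> "look up the answer"
```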

Where does it get that information? One possibility is the sensory system. In fact, it’s hard to know where else the information could come from. In order to solve your problem I have to know what it is, and in order to prioritise my goals I have to know more about your problem. That means I have to listen to what you are saying, and my brain has to react to the vibrations that arrive at my eardrum.

And that information has to get to the quantum mind.  What the quantum mind decides must therefore be, in part, an output from the input of my body and brain.

So my very simple question to Beauregard, Stapp, Schwartz, O’Leary et al is: in what sense is your postulated quantum mind anything more than part of the process by which I, as a person (an organism), respond to incoming information with goal-appropriate actions? If the quantum mind is adding something extra to the process, on what basis is it doing so? If on the basis of incoming information, why is its contribution not a result of that input? If on the basis of no information, in what sense are the decisions it makes anything more than a coin toss?

And, to IDers generally: if a divine mind can alter the configuration of a DNA molecule by somehow selecting, from quantum probabilities, those outcomes most likely to bring about some goal formed on the basis of information to which we are not privy, how could we tell that the resulting DNA molecule is the result of anything other than probabilities that are perfectly calculable using quantum physics? And if those molecules violate those probabilities – if DNA molecules suddenly start to form themselves consistently into configurations highly improbable under the laws of quantum mechanics – on what basis would we invoke quantum mechanics, or even a quantum mind, to “explain” it?
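The statistical point can be put as a toy likelihood check (the probabilities and counts are made up): outcomes that track a theory’s predicted probabilities give no purchase for detecting a hidden selection, while outcomes that consistently favour improbable configurations are detectable as a violation – but then the theory that assigned those probabilities can no longer be what explains them.

```python
from math import log

def log_likelihood(observed_counts, predicted_probs):
    """How well do observed outcome counts fit the probabilities a
    physical theory predicts? (Toy numbers, for illustration only.)"""
    return sum(c * log(p) for c, p in zip(observed_counts, predicted_probs))

# Probabilities the theory assigns to three configurations.
predicted = [0.70, 0.20, 0.10]

# Outcomes that track the predicted probabilities: a "selection"
# hiding inside the indeterminacy is undetectable here.
consistent = [700, 200, 100]

# Outcomes that consistently favour the improbable configuration:
# detectable as a violation, but no longer explained by the theory
# that assigned the probabilities.
violating = [100, 100, 800]

assert log_likelihood(consistent, predicted) > log_likelihood(violating, predicted)
```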

I don’t think you can use “quantum” as an alibi for “anything improbable that we can’t explain”.  If Divine intention is smuggled in under the guise of quantum indeterminacy, then how could we detect it? And if your inference is that Cambrian animals must have been intended because they are otherwise unlikely, how do you explain that in terms of quantum mechanics?  And if quantum mechanics won’t do the job, we are back to square one:

How does mind move matter?

212 thoughts on “How does mind move matter?”

  1. Blas: I mean that if you think that all the reality can be studied by evidence based science.

    No. I think there is too much reality ever to be covered by evidence-based science.
    I also think the answer depends on what you regard as “reality”. I think there are real things that are not amenable to science, because they are not understood as science. Like the opening of Bach’s St John Passion.

  2. Mike Elzinga:

    Some of the unhealthiest communities are those that meet regularly to reinforce their beliefs and solidify their self-identity and exclusivity while demonizing outsiders.

    But not TSZ! God forbid.

  3. Mung:
    Mike Elzinga:

    But not TSZ! God forbid.

    You’re posting here at TSZ.

    Lizzie is banned at UD.

    End of argument.

  4. Alan Fox:

    ‘How do we know what we know’ is a scientific question that can be tackled by observation and experiment. And we learn from each other when we share experience. We don’t have to reinvent the wheel. All has to go via sensory inputs, unless someone has an alternative.

    What means, unless someone has an alternative?

    I have an alternative. You don’t know what you’re talking about.

    ‘How do we know what we know’ is a scientific question that can be tackled by observation and experiment.

    How do you know this?

    And we learn from each other when we share experience.

    How do you know this?

    We don’t have to reinvent the wheel.

    How do you know this?

    All has to go via sensory inputs, unless someone has an alternative.

    How do you know this?

    So Alan thinks his questions are not philosophical after all, but rather scientific.

    How does he know this?

    Can science tell us what knowledge is?

    If not, can science tell us what we can know?

    Elizabeth Liddle:

    We make models of the data that arrives via our sensory inputs. So no, my view does not set me apart from Alan.

    Sure it does. You just lack the appropriate level of skepticism to see it. You describe something as “data” that “arrives” via “inputs.”

    Who is the “we” you speak of and how does this “we” you speak of transform this “data” which “arrives” via sensory “inputs” into “models”? Inputs to what? And how do you know?

  5. thorton

    “You’re posting here at TSZ.”

    So?

    “Lizzie is banned at UD.”

    So?

    “End of argument.”

    What argument?

  6. Mung:

    What argument?

    Get those champions of free speech Barry Arrington and Gordon E. Mullings to explain it to you.

  7. Mung:
    How do you know this?

    How do you know this?

    How do you know this?

    How do you know this?

    So Alan thinks his questions are not philosophical after all, but rather scientific.

    How does he know this?

    Can science tell us what knowledge is?

    If not, can science tell us what we can know?

    Elizabeth Liddle:

    Sure it does. You just lack the appropriate level of skepticism to see it. You describe something as “data” that “arrives” via “inputs.”

    Who is the “we” you speak of and how does this “we” you speak of transform this “data” which “arrives” via sensory “inputs” into “models”? Inputs to what? And how do you know?

    Mung, I didn’t have you pegged as a presuppositionalist.

    Are you a presuppositionalist?

  8. “Intelligent agents have to operate on matter to get it to move.”

    Stephen Meyer or Elizabeth Liddle?

  9. I’ve moved a few comments about meta-matters to sandbox and a couple of comments to guano that appear to be outside the site rule of “assuming good faith”. Using “liar” or synonyms is an example.

  10. Mung:
    Alan Fox:

    What means, unless someone has an alternative?

    I have an alternative. You don’t know what you’re talking about.

    How do you know this?

    How do you know this?

    How do you know this?

    How do you know this?

    So Alan thinks his questions are not philosophical after all, but rather scientific.

    How does he know this?

    Can science tell us what knowledge is?

    If not, can science tell us what we can know?

    Elizabeth Liddle:

    Sure it does. You just lack the appropriate level of skepticism to see it. You describe something as “data” that “arrives” via “inputs.”

    Who is the “we” you speak of and how does this “we” you speak of transform this “data” which “arrives” via sensory “inputs” into “models”? Inputs to what? And how do you know?

    I’ll respond to this in the thread mung has started.
