This is a thread to allow discussions about how those lucky enough to have free will make decisions.
As materialism doesn’t explain squat, this thread is a place for explanations from those that presumably have them.
And if they can’t provide them, well, this will be a short thread.
So, do phoodoo, mung, WJM et al. care to provide their explanations of how decisions are actually made?
This discussion started when you claimed “matter can not choose”. The definitions we’ve been using for “decide” and “choose” do not make any assumption about conscious thought nor do they assume that something immaterial is required for making a decision (the latter would be begging the question).
If you don’t think that a material process like a human brain or certain software systems can make decisions, you need to defend that claim based on the definitions we’re using, not by assuming your conclusion.
So, based solely on these definitions:
decide: to choose between one possibility or another (your definition)
choose (from the dictionary)
a) pick out or select (someone or something) as being the best or most appropriate of two or more alternatives.
b) decide on a course of action, typically after rejecting alternatives.
please support your claim that material processes cannot, even in principle, make decisions.
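For concreteness, here is a minimal sketch (in Python; every name in it is hypothetical, invented just for illustration) of a purely material process that satisfies definition (a) as written: given two or more alternatives, it picks out the one it scores as most appropriate. Whether that counts as "really" choosing is exactly what is in dispute, but it meets the letter of the definition.

```python
# Hypothetical illustration only: a material process that "picks out or
# selects ... the best or most appropriate of two or more alternatives."

def choose(alternatives, score):
    """Return the alternative with the highest score under the given criterion."""
    best, best_score = None, float("-inf")
    for option in alternatives:
        s = score(option)
        if s > best_score:
            best, best_score = option, s
    return best

# Example: pick a route by shortest travel time (minutes).
routes = {"motorway": 42, "back roads": 57, "scenic coast": 75}
print(choose(routes, score=lambda r: -routes[r]))  # -> "motorway"
```

Nothing in that sketch appeals to consciousness or to anything immaterial, which is precisely the point at issue.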
Indeed, definitions are important. “Select” means to choose. fifthmonarchyman and I have been using this standard definition for “choose”:
a) pick out or select (someone or something) as being the best or most appropriate of two or more alternatives.
b) decide on a course of action, typically after rejecting alternatives.
It’s also important not to commit fallacies like begging the question. The question in this subthread is whether or not material processes can make decisions. By all of the definitions provided thus far, they can. Are you going to be joining fifthmonarchyman in insisting that the definition include your preferred conclusion?
I offer as evidence your last half-dozen or more replies to me. You have consistently been ignoring the definition you provided.
I suppose that mostly it seemed like one of those things to throw at the problem, just say it’s irreducibly complex to rhetorically make it sound impressive–and who is to say otherwise about a made-up entity of “mind” that is not brain or its processes? But beyond that there’s probably the old idea that both soul and body are essential to mortal human existence, and once the soul goes away the human dies.
On the other hand, though, the soul needs the body to function as a mortal human, so if the brain/body is damaged the soul can’t just make up for the brain’s losses, since as a mortal the two are inextricably linked. To put it another way, why would the soul get drunk from alcohol? The thing is that the immaterial “mind” seems to be tied to earthly things, so it’s not merely trying to make a meaningless but important sounding connection between “mind” and brain to say that they’re connected in an irreducibly complex manner, it’s also trying to account for the fact that “mind” seems oddly affected by material changes in the brain.
Of course it’s not an actual explanation of anything, any more than we’ve received an explanation for how “mind” makes decisions that the brain supposedly can’t (also not explained). But it is a rhetorical attempt to account for the fact that “mind” is affected by body even though it’s supposedly immaterial. There’s a kind of logic to it, I think, although of course it’s merely the logic of rationalizing one’s beliefs to fit inconvenient facts.
Glen Davidson
Well, this is the whole reason why your side has to give some kind of explanation for how a brain, a purely materialistic brain made only out of purposeless chemicals, can make a decision.
And since your side steadfastly refuses to offer anything in the way of an explanation, preferring instead to dodge and pretend the question hasn’t been asked, we are left with no choice but to reject such an extraordinary claim.
Considering is just responding to stimuli.
Pinballs do, yes. Some programs, otoh, are not simply passive, but actually do change based on input. That, to me, is the basis of decision-making capability. And no, consciousness is not necessary for decision-making.
Your opinion is duly noted and summarily rejected.
I can’t believe you’re so biased by your religious views…
I’m sure there are lots of things lots of people can’t believe take place because, from those people’s perspectives, things are just SOOOO obvious. Ironically, chances are over half those people are wrong.
Not only do some programs (or collections of programs) change based on input, some change their behaviors to the same inputs over time based on the results of previous actions. In other words, they learn.
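As a rough sketch of what I mean (a toy example in Python; the class and names are made up for illustration, not taken from any particular learning library), here is a bit of code whose response to the very same request shifts over time, depending on how its earlier picks turned out:

```python
import random

# Toy illustration: the "agent" starts out indifferent between its actions and
# gradually favors whichever one has produced the better results so far.

class LearningChooser:
    def __init__(self, actions):
        self.value = {a: 0.0 for a in actions}   # running estimate of each action's payoff
        self.count = {a: 0 for a in actions}

    def pick(self, explore=0.1):
        # Occasionally try something at random; otherwise take the action
        # whose past results have been best.
        if random.random() < explore:
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)

    def feedback(self, action, reward):
        # Fold the observed result into the running average for that action.
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

chooser = LearningChooser(["left", "right"])
for _ in range(1000):
    a = chooser.pick()
    chooser.feedback(a, reward=1.0 if a == "right" else 0.0)

print(chooser.pick(explore=0.0))  # almost certainly "right" by now
```

Call that "learning" or not as you please; the behavioral fact is that the same input no longer produces the same response.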
Quite so.
You should give it a shot. You can’t be any worse at it than Patrick and keiths.
I don’t see how it can make sense to say that computers make decisions (although they can of course implement a decision-making procedure).
Consider a lynx stalking a rabbit. She must decide from what angle to approach, and how quickly or quietly to move, based on her understanding of the position and distance of the rabbit, what she knows about how rabbits evade predators, the relevant detectable environmental factors (light, available cover, wind direction). And all this is evaluated in light of what the lynx wants (this particular rabbit) because of what she needs (food) in order to satisfy her purposes and goals (staying alive).
Though the lynx’s decisions are far less complex than those of a human being, we can still see in this crude example that she decides in light of what purposive actions will best satisfy her needs and desires, in light of what is known to be the case (insofar as she knows it).
Nor is this anthropomorphizing the lynx — though of course it would be anthropomorphizing the lynx if we were to describe her as being aware of herself as engaged in the activity of deciding (as in cartoons, where animals are portrayed as trying to make up their minds by talking to themselves just as we do).
But computers lack the defining characteristic of organisms: intrinsic purposiveness. A computer, like any machine, is designed to carry out a set of prescribed tasks. In the case of computers, the prescribed task is to simulate the actions of any machine that can be defined as a set of instructions. The computer has no goals or values of its own because it does not act in order to maintain its own existence. That’s what it means to say that computers are not alive.
While I do not want to say (necessarily) that all living things can make decisions, I think it is relatively clear that nothing can make decisions if it is not alive. For that reason I think that talking about what computers can and can’t do is unlikely to clarify the nature of decision-making, even if a computer can simulate it or implement a decision-procedure with which it is programmed.
At the same time, the defender of agent causation will have to either say that lynxes and many other animals are also initiators of novel causal chains as we are, or else say that a lynx’s decisions and a human’s decisions differ in kind and not just in degree — say, in degree of prefrontal cortical modulation over the limbic system.
And software does not consist of changing information either.
So when you launch your browser your computer decides to start your browser app, not you? And when you type in a url, your computer decides to go visit that page, not you?
I can! This is where physicalism gets you. Absurdity.
This morning my computer thought it best to install an upgrade and even selected the best upgrade to install!
First, there must be parts. Something without parts cannot be irreducibly complex.
God has no parts, therefore God cannot be irreducibly complex.
The system capable of transcribing and translating DNA sequences into amino acid sequences is composed of many parts, and may therefore be irreducibly complex.
Makes sense so far?
An obstacle to DARWINIAN evolution. You may not think that’s an important distinction to make, but it seems important to the IC crowd. 🙂
I’m in agreement here. In what sense is the mind a part of something, and what is it a part of? Can “parts” be immaterial?
Actually, no.
http://www.evolutionnews.org/2016/09/michael_behes_c103159.html
This mostly seems right to me.
Yes, and that’s why your analysis seems right.
It can be useful to talk of computers as if they are making decisions. But when we do that, we are relying on our own purposes rather than any intrinsic purposes of the computer.
Do you get a pass when it comes to defending your claims, Patrick?
Because I’ve seen no defense from you of your claim that software can make choices. None. Will you be retracting it? No? Thought not.
Very nice post.
Kantian Naturalist,
I think that “decide” is a rather lumpish term. It can mean “pick” or “choose,” and I don’t see how a computer cannot decide in those senses, at least in the broadest sense. On the other hand, it does seem a bit odd to think of a computer deciding something, mainly because that term typically refers to “thinking,” which is often not ascribed to computers while it is to humans. We say that someone “made a decision” typically as a black box term, while a computer might “select” or “pick” something by running code. But when a child chooses a toy over something else, we may refer to that as a decision, so we’re back to the fact that partial synonyms for “decide” happen to fit some of what computers can do.
It’s not clear that wants, or purpose, is necessary for a decision to take place. But it may be that the black box aspect of decision making might be typical of the meaning of “decide,” and I think that we’re less likely to call someone’s resulting pathway a “decision” if it’s due to some rather mechanically rational process. So someone calculated the best route to take, rather than decided the best route to take, because the calculation found the shortest distance, while the decision might have balanced the scenic value of a route as well as aiming for a reasonably short journey. One can’t calculate the scenic value, so one has to decide. But one might also have to decide the best route even if the shortest distance is all one cares about if one cannot find or calculate which is the shortest route.
So maybe we’re less happy about calling a programmed response (even if part of it is “learned programming”) a decision. But what if the computer uses random noise to come to a “decision” where programmatic response is impossible? One thing has to be chosen, nothing makes one choice better than another, so random noise causes the computer to pick a “random” thing. Then it seems more like a decision, yet the fact that it’s all “mechanical” response, even to picking a random event to yield a certain result, makes it seem less like a decision even so. It’s not “thinking,” which really is substantially different from digital computing.
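Something like this toy sketch (Python, with names invented purely for illustration) is the sort of thing I have in mind: the scoring can’t separate the options, so pseudo-random noise settles which one gets picked:

```python
import random

# Toy illustration: when the criterion leaves two or more options tied,
# nothing "rational" favors one over the other, so noise breaks the tie.

def pick_with_tiebreak(options, score):
    scores = {o: score(o) for o in options}
    best = max(scores.values())
    tied = [o for o, s in scores.items() if s == best]
    return random.choice(tied)  # the noise does what the calculation could not

# Two routes of identical length (kilometres), so the score can't separate them.
route_lengths = {"route A": 12.4, "route B": 12.4}
print(pick_with_tiebreak(route_lengths, score=lambda r: -route_lengths[r]))
```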
Having written all of that, I would note that “decision making” is indeed a term used in making programs for doing just that, or at least something a lot like it. Hence we just get back to the problem that “decision” can cover a host of phenomena that lead to one result rather than to other ones, and that we just have to, well, decide on what we mean by “decision making” for a given phenomenon.
What really remains utterly unknown is how, or if, anything immaterial (meaning non-physical) can ever make a decision, whatever one might mean by that term.
Glen Davidson
Exactly so.
It has a real meaning. https://en.wikipedia.org/wiki/Muller%27s_ratchet
I’d mildly dispute that. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2811146/
Obviously a flat battery can be charged, but (from the link) the robots evolved altruistic behaviours that worked to maintain their existence (battery life).
If we (somehow!) gave them a voice I’m sure they’d ask for some sweet sweet electrons first of all 😉 But they seem to act to maintain their own existence to me, in their way.
This reminded me of the sometimes weird videos of trial and error learning with robot arms. http://www.popsci.com/googles-robots-are-learning-hand-eye-coordination-with-artificial-intelligence
walto,
Fifth brought up irreducible complexity as an excuse for not addressing the interaction problem:
keiths:
fifth:
Fifth couldn’t answer the question, so he gave an excuse.
It’s a failed excuse, because nothing prevents us from examining or explaining the interaction of parts in an irreducibly complex system.
From that link:
Deciding? Old hat. Programs are realizing things now 🙂
A good point. Fmm, how do you know it’s an irreducibly complex system? Is it just a belief? Did you read it in a book? Did a voice from the sky let you know? How old were you when you discovered this? Presumably you were not born knowing it, when did you find it out? How?
There is also an IC Wiki page. https://en.wikipedia.org/wiki/Irreducible_complexity
Right. I’d have thought something must be irreducible in an irreducibly complex system and that the system as a whole must be wildly complex, but based on the definition that KN gave, nothing actually has to be either. It’s just kind of a nice phrase to utter when one is waving one’s hands.
Do you really not understand? I suggest you go back and read the definition again; maybe you will finally get it. I’m not ignoring it.
The definition itself precludes software from deciding
Exactly, choosing is an act that requires conscious awareness. That is beyond obvious.
You never said
Do you think a river chooses the path it will take to the sea?
Do you think a corpse decides to undergo rigor mortis?
peace
Of course they do; they presuppose conscious thought.
unconscious choice and unconscious decision making are oxymorons.
peace
So all the Freudians are wrong about the possibility of unconscious thought? Do you never wake up in the morning and realize that some decision you’d been struggling with for weeks had suddenly been made?
Quite right, and it is this whole problem of “being alive” that science, after all these years of considering it, can really say nothing about. Why does something become alive? How does something become alive? How does simply being alive immediately lead to the process of thoughts and reactions?
The fact that this question is so elusive, and so unlikely ever to be solved by man, lends credence to the notion that nature alone cannot explain it. If nature could explain “aliveness” we might be justified in dismissing immaterial forces. But it can’t. Why is it so hard to grasp aliveness, if it is merely a result of chemical reactions?
There is no good explanation for the elusiveness of this answer, other than that it can’t be found in nature alone.
Alive means to be able to choose. That, in and of itself, is essentially what separates alive from not alive. That’s pretty amazing if you think about it.
walto,
There’s nothing wrong with the concept of irreducible complexity per se, just with some of the uses to which it is put, in support of ID and in fifth’s excuse for punting on the interaction problem.
In an irreducibly complex system, it’s the complexity that is irreducible, in the sense that reducing it causes the system to lose its original function.
The degree of complexity doesn’t matter, as long as reducing it leads to the loss of function mentioned above.
The IDers’ failed argument is that IC systems of a sufficient complexity can’t evolve by building up the system step by step, because only the full system can be favored by selection, and the full system is overwhelmingly unlikely to come about by chance.
That argument fails because
a) it assumes that the function of a system can’t change during the course of its evolution; and
b) it ignores the fact that IC systems can evolve by the removal of parts from non-irreducibly complex systems.
Fifth’s excuse fails because even if it were true that the brain and the immaterial mind formed an irreducibly complex system, it wouldn’t mean that the interaction between the two can be taken for granted and does not need to be explained.
I’m reminded of Sal Cordova’s hilarious attempt to rescue Behe by redefining IC to explicitly prohibit a change in function. I think it was at UD a few years back now. (Might even have been at ARN)
Mind-body interaction: What’s the problem?
Yes, if it is IC it can become even more IC. Evolution itself requires an IC system.
You know, John von Neumann.
Well, for starters, when consciousness is not present, decisions are not made by the brain.
Think about a corpse or a coma patient or sleep.
In this universe the function (deciding) requires both a brain and consciousness.
If either is absent the function ceases
That is what happens at the moment of death or sleep. The brain is still there and often still functioning normally but all deciding stops.
peace
A lot of stuff can go on in an unconscious brain.
I would not include thought. Here is a definition and a short list of synonyms to illustrate what I’m talking about:
quote:
Thought- a careful weighing of the reasons for or against something
Synonyms account, advisement, debate, deliberation, reflection, study, consideration
Related Words cogitation, contemplation, meditation, pondering, rumination; introspection; agonizing, hesitation, indecision; premeditation
end quote:
from here
http://www.merriam-webster.com/thesaurus/thought
Yes, but it’s made when I’m awake not while I’m asleep.
Do you ever wake up in the morning and realize you made a decision while you were asleep?
I did not think so
peace
It’s not an excuse; it’s an observation.
And I don’t take it for granted any more than I take any IC system for granted. They all demand explanation.
I would argue that all IC systems are inaccessible by step-by-step algorithmic means (like Darwinian evolution), but that is another topic.
IIT is all about exploring this irreducible relationship and how it arises. The interaction between the brain and consciousness is a fascinating topic of live scientific exploration.
But any explanation will include immaterial consciousness and not just the material brain
peace
fifth,
If it’s not an excuse, then when are you going to get around to answering the question?
fifth:
Let’s hear your explanation, already.
You should have thought so.
And I think you should read some Freud and some Horney. They, and many others, argue that the events described in your dictionary definition DO go on in the unconscious. Maybe you don’t agree, but quoting Webster is not really going to convince those who are quite familiar with it.
I agree with you.
I’ve already given my explanation of how decisions work. That is the topic of this thread
If instead you are looking for an explanation of how consciousness interacts with the physical brain to make one irreducible mind it is literally the hardest problem in science, philosophy and theology.
I think IIT provides a way forward that is exciting but incomplete.
The final explanation, I would suggest, will look something like this:
http://www.christonly.com/yahoo_site_admin/assets/docs/Hypostatic_Union.7524617.pdf
peace
Please don’t assume I’m unfamiliar with this topic. Instead, give an argument in your own words as to how the unconscious is capable of doing things that are exclusive to consciousness.
I know you are capable of doing this if such an argument exists and is coherent.
I’m not in any way minimizing the importance of the subconscious or the contribution of Freud and his followers
I’m just pointing out the obvious fact that the subconscious is not conscious.
peace
I agree with keiths — it would be nice to hear what the explanation actually is and not just assert endlessly that you have one.
This is way off topic, but the evolution of IC systems is not the issue; their origin is.
If an IC system with a unique function exists it did not arise by the removal of parts from a non-irreducibly complex system. That is because the function (being unique) could not exist in a non-irreducibly complex system.
Remember we are discussing the conscious mind
Are you arguing that conscious awareness arose from non-consciousness because the brain lost some of its parts?
Peace
You have (at least) two problems:
1) The definitions we’ve been using do not include the concept of “conscious thought”. You’re attempting to force that concept in because otherwise software systems can be demonstrated to “decide” by the criteria of those definitions.
2) Even if we switch to discussing “ffm-decisions”, that doesn’t support your claim that material processes can’t decide. Humans are conscious and there is no evidence that anything but material processes are involved in that behavior.