What is a decision?

Arcatia has stated that before any thought can occur, there must first be a chemical change in the brain. So if, before any decision is made, we first need a chemical change, then it is not really a decision, now is it? It is merely a response to that chemical change, over which we have no control.

On several occasions keiths has ducked and dodged away from this problem. Arcatia now seems to want to run away from it, as has every other materialist here on this forum. About the best you can hope for is some kind of obfuscated rant about what is meaning, what is will, how do we know we know, what’s the epistemological nature of the epistemology…and on and on go the deflections from anything that could be considered an answer. Generally people here pretend that if you stick the suffix “sian” at the end of any name, you have said something profound.

So it deserves its own thread. Let the bullshit answers speak for themselves. In the end we will see if anyone actually tries to address it. It’s the toughest question for materialists to wiggle out of, in my opinion.

165 thoughts on “What is a decision?”

  1. Steve:

    Volcanic systems are physical.

    Brain systems are physical.

    However, this does not mean both systems will contain the same attributes.

    Exactly. That’s why your argument fails in the following exchange:

    Steve:

    Physical systems cannot weigh, anticipate, deliberate, select because they do not possess intelligence.

    keiths:

    That is an assertion. Please supply evidence and an argument in support of it.

    Steve:

    Bullshit.

    It is self-evident.

    We OBSERVE no weather system, no celestial system, no volcanic system with those attributes.

    Therefore, they do not possess them.

    Not assertion. FACT.

  2. Mung: Do you have any sort of references to where Aristotle discusses what it means to be a person and where he restricts personhood to animals?

    I might have misspoken there.

    “Person” and “personhood” are not Aristotelian terms, or terms of any ancient philosophy. The term “person” originally meant “mask” or “false face,” referring to the masks worn by actors in Roman dramas. It was later adapted to the Trinity, where God is One but reveals Himself in three different aspects. Under the pressure of the thought that a human being is ‘created in the image of God’, the idea of human beings as persons then took shape. The idea that human beings are persons was (I would say) secularized in the 16th through 18th centuries, especially as ideas about sovereignty were generalized — e.g. in Locke’s idea that each and every person is sovereign over his own body. (Of course Locke doesn’t think that applies to women, Blacks, slaves, or Native Americans.)

    At this point the notion is firmly lodged in our ethical and political vocabulary, so it seems better to use it (if we can) than avoid it.

    The idea that I want to hold onto here is that it is analytically true that a person is a rational animal, but contingently true that a normal mature human being is a person.

    One could say that my interest in the evolution of rationality is a question about the emergence of personhood within the natural order.

  3. Steve:

    Hypothesis: Intelligence is responsible for the difference in attributes between volcanic systems and brain systems. One contains it. The other does not.

    Steve, just before:

    Brain systems are physical.

    Okay. So Steve says that brain systems are physical systems exhibiting intelligence. Meanwhile, Steve says that physical systems cannot exhibit intelligence:

    Physical systems cannot weigh, anticipate, deliberate, select because they do not possess intelligence.

    Which is it, Steve?

  4. Before you fight it out with yourself, I have a question: Are you a split-brain patient, by any chance?

  5. I posed some questions to fifth on the “What is a decision in phoodoo world” thread:

    fifth,

    I think you need to spell out what you are looking for here. I for one have no clue what would satisfy you short of a decision algorithm.

    An explanation. We’re asking for an explanation, not an algorithm.

    You could start by addressing a few problems that you’ve been sweeping under the rug — problems that the physicalist handles with ease:

    1) How does the immaterial soul get information from the physical world in order to make decisions? (Hint — “revelation” is not an acceptable answer.)

    2) How does the immaterial soul represent and manipulate information in the process of making decisions? (Not an algorithm — a description.)

    3) How does the immaterial soul, having made a decision, get the physical body to do its bidding?

    Here’s how I, as a physicalist, answer the equivalent questions:

    1) No problem. The sense organs transduce sensory stimuli into nerve impulses, which make their way to the physical brain.

    2) No problem. The brain represents information physically and processes it physically. Computers do the same thing (a rough sketch follows after this list), although the representation and processing obviously differ from those used by brains.

    3) No problem. The physical brain sends out nerve impulses that cause the physical body to respond.
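
    A minimal sketch of what point 2 means by representing and processing information physically: in a computer, the “information” is just the state of some memory, and “processing” is a physical transformation of that state. The message and the uppercase step below are arbitrary illustrations, not anything specific from the discussion above.

```python
# Hypothetical illustration: "information" stored as physical state (bytes)
# and "processing" as a physical transformation of that state.
message = "choose chocolate"  # arbitrary example content

# The stored representation is just bits in memory; show them explicitly.
stored_bits = " ".join(f"{byte:08b}" for byte in message.encode("ascii"))
print("physical representation (bits):", stored_bits)

# "Processing" here is an arbitrary transformation of the stored state.
processed = message.upper()
print("after processing:", processed)
```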

  6. keiths,

    It seems all you have said is, it just does it.

    You have said nothing at all about how or why one choice is selected over another. THAT is what a decision is.

  7. Responding to keiths’s criticism of my non-physicalist use of Dennett:

    I like your example of Victoria, and I quite agree that the intentional stance is not sufficient to explain all of human behavior. (Did I say I thought otherwise?) My resistance to your position is where you want to say that reasons are a kind of cause, whereas I want to say that there is a point to distinguishing between reasons and causes within a broadly construed naturalism.

    The conceptual space I want to retain for talking about reasons lies in how we justify, give evidence for, deliberate individually and collectively, decide what is best in light of our conception of the good, and so on.

    I don’t see any hope for the idea that these fundamentally normative concepts — concepts that are deeply tied up with agency, meaning, and experience — could be taken over by any stance that refers to objects and ascribes properties to them in terms of measurements.

    Maybe Laplace’s demon could do it, or an omniscient God, but there is room for neither in my version of naturalism.

    Though there are instances where the predictive utility of the intentional stance fails us — a brain tumor causing irrational behavior is one kind of case, bad emotional management due to poor sleep or low blood sugar is another — what I am resisting is the thought that the objects characterized by taking a design stance are more real than the objects characterized by taking an intentional stance.

    They are just different stances, equally good at doing the only thing that stances can do: give us an embodied strategy for keeping track of salient real patterns. The fact that one stance sometimes breaks down (as per your example) doesn’t mean that the design stance or the physical stance captures a more basic or fundamental “level” of reality; it just means that different stances have different purposes.

  8. keiths: The brain represents information physically and processes it physically.

    If it’s a representation of information it’s not the information itself. And if all any other physical system does is represent information (like your computer), then information must be non-physical.

  9. keiths: The physical brain sends out nerve impulses that cause the physical body to respond.

    You don’t know that the body is not deciding to respond to the nerve impulses.

  10. Mung: If it’s a representation of information it’s not the information itself. And if all any other physical system does is represent information (like your computer), then information must be non-physical.

    That doesn’t follow. Information is structure or organization, sure, but it doesn’t follow that what is structured or organized must itself be “non-physical”. You want to take a way of describing something — as its information — and treat that as ontologically separable from the thing being described.

  11. Above, keiths and I were talking about the relation between the personal stance and the subpersonal stance. He urged the idea that the subpersonal stance can explain pathologies that the personal stance can only describe.

    I do think he’s right about that. I just don’t think it shows that the subpersonal stance is a better approximation of objective reality than the personal stance is, because the personal stance is not an approximation of objective reality at all.

    The underlying reason why Churchland is wrong about eliminative materialism is that the framework of beliefs and decisions is not in the first instance an explanatory framework, and hence is not something to be replaced by a superior explanatory framework. It is a normative framework, for specifying how persons ought to be treated. That’s where the deliberative and predictive roles of that framework have their proper function.

    Put otherwise, eliminative materialism (a la Churchland) and even Dennett’s plurality of stances mischaracterize the fundamentally normative role of the personal or intentional stance. This conflation of normative discourse and descriptive-explanatory discourse is but a recent version of an old error, the naturalistic fallacy.

    And saying that is quite consistent with saying that the intentional stance has explanatory inadequacies.

    In related news, here’s a blog post that summarizes recent work on how to explain some psychopathologies in terms of malfunctions in predictive processing. It links the PP model to specific neurotransmitters, which I haven’t seen before.

  12. KN,

    Above, keiths and I were talking about the relation between the personal stance and the subpersonal stance. He urged the idea that the subpersonal stance can explain pathologies that the personal stance can only describe.

    Not just explain, but predict. The predictions of the physical stance succeed, in principle, where the predictions of the intentional stance fail. In my recent example, the intentional stance fails to predict that Victoria will gamble away her life savings. The physical stance does predict it (in principle) based on the existence of a tumor growing in her frontal cortex.

    It’s asymmetric: The physical stance can predict everything that the intentional stance can, but not vice-versa. The intentional stance is less accurate.

    We retain it, though, for a very important reason: it’s far more tractable than the physical stance. The tradeoff is between accuracy and usability.

    I do think he’s right about that. I just don’t think it shows that the subpersonal stance is a better approximation of objective reality than the personal stance is, because the personal stance is not an approximation of objective reality at all.

    Seriously?

    The underlying reason why Churchland is wrong about eliminative materialism is that the framework of beliefs and decisions is not in the first instance an explanatory framework, and hence is not something to be replaced by a superior explanatory framework. It is a normative framework, for specifying how persons ought to be treated. That’s where the deliberative and predictive roles of that framework have their proper function.

    That’s simply not true. The intentional stance is an explanatory and predictive framework, not a normative one. The fact that a person can be profitably modeled as an intentional system does not in any way dictate how that person should be treated. Jailers and torturers use the intentional stance as surely as psychologists and other healing professionals do.

    Dennett offers an example in which the intentional stance is usefully applied to a chess-playing computer. Do you really think that doing so imposes norms on us regarding the treatment of such computers?

    Put otherwise, eliminative materialism (a la Churchland) and even Dennett’s plurality of stances mischaracterize the fundamentally normative role of the personal or intentional stance. This conflation of normative discourse and descriptive-explanatory discourse is but a recent version of an old error, the naturalistic fallacy.

    The intentional stance is not normative, so it isn’t fallacious to treat it and the physical stance as alternative descriptions of the same underlying reality.

  13. KN,

    I like your example of Victoria, and I quite agree that the intentional stance is not sufficient to explain all of human behavior. (Did I say I thought otherwise?) My resistance to your position is where you want to say that reasons are a kind of cause, whereas I want to say that there is a point to distinguishing between reasons and causes within a broadly construed naturalism.

    There is a point in distinguishing an agent’s reasons from other types of causes, but no reason I can see for trying to distinguish reasons from causes, since reasons are causes.

    If I’m given a choice between chocolate and vanilla, and I choose chocolate because I hate vanilla, then in what sense is my reason not a cause?

    I don’t see any hope for the idea that these fundamentally normative concepts — concepts that are deeply tied up with agency, meaning, and experience — could be taken over by any stance that refers to objects and ascribes properties to them in terms of measurements.

    Maybe Laplace’s demon could do it, or an omniscient God, but there is room for neither in my version of naturalism.

    Don’t conflate the practical issues with the in-principle issues. The intentional stance is indispensable in practical terms, but that is because the physical stance is too unwieldy to be useful in many cases, not because the physical stance is incapable in principle of dealing with scenarios that are usefully modeled at the intentional level.

    Though there are instances where the predictive utility of the intentional stance fails us — a brain tumor causing irrational behavior is one kind of case, bad emotional management due to poor sleep or low blood sugar is another — what I am resisting is the thought that the objects characterized by taking a design stance are more real than the objects characterized by taking an intentional stance.

    I’m not saying that the objects invoked by the lower-level stances are more real than those invoked by the intentional stance. They’re just different ways of looking at and describing the same underlying reality. The differences are in simplicity, usefulness, and accuracy, not in “realness”. The intentional stance is supremely useful because of its relative simplicity, but it sacrifices accuracy in exchange for that utility.

  14. Kantian Naturalist: That doesn’t follow. Information is structure or organization, sure, but it doesn’t follow that what is structured or organized must itself be “non-physical”. You want to take a way of describing something — as its information — and treat that as ontologically separable from the thing being described.

    I was thinking about this over the weekend. I do a lot of bird watching (well…wildlife watching actually) and was thinking about how I identify given organisms.

    I guess I’m not entirely sure what some people mean by information. To me, feathers themselves are information. Seeing feathers pretty much immediately tells me that the organism I’m looking at is a bird. The color, contrast, shape, markings, and use further narrow down the type of bird. But it’s the feathers themselves and the characteristics of those feathers (or the veins, color pattern, movement, and shape if it’s an insect) that are the information, it seems to me. Is that not what information is?

  15. KN,

    Just to drive a couple of my earlier points home — a) that the intentional stance is predictive and explanatory, not normative, and b) that the lower-level descriptions are more accurate than the higher-level ones — here is a quote from Dennett:

    An even riskier and swifter stance is the intentional stance, a subspecies of the design stance, in which the designed thing is treated as an agent of sorts, with beliefs and desires and enough rationality to do what it ought to do given those beliefs and desires. An alarm clock is so simple that this fanciful anthropomorphism is, strictly speaking, unnecessary for our understanding of why it does what it does, but adoption of the intentional stance is more useful—indeed, well-nigh obligatory—when the artifact in question is much more complicated than an alarm clock. Consider chess-playing computers, which all succumb neatly to the same simple strategy of interpretation: just think of them as rational agents who want to win, and who know the rules and principles of chess and the positions of the pieces on the board. Instantly your problem of predicting and interpreting their behavior is made vastly easier than it would be if you tried to use the physical or the design stance. At any moment in the chess game, simply look at the chessboard and draw up a list of all the legal moves available to the computer when its turn to play comes up (there will usually be several dozen candidates). Now rank the legal moves from best (wisest, most rational) to worst (stupidest, most self-defeating), and make your prediction: the computer will make the best move. You may well not be sure what the best move is (the computer may ‘appreciate’ the situation better than you do!), but you can almost always eliminate all but four or five candidate moves, which still gives you tremendous predictive leverage. You could improve on this leverage and predict in advance exactly which move the computer will make—at a tremendous cost of time and effort—by falling back to the design stance and considering the millions of lines of computer code that you can calculate will be streaming through the CPU of the computer after you make your move, and this would be much, much easier than falling all the way back to the physical stance and calculating the flow of electrons that result from pressing the computer’s keys. But in many situations, especially when the best move for the computer to make is so obvious it counts as a ‘forced’ move, you can predict its move with well-nigh perfect accuracy without all the effort of either the design stance or the physical stance.
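
    A rough sketch of the prediction strategy Dennett describes above: enumerate the legal moves, rank them from best to worst, and bet that the rational, win-seeking machine will play one of the best few. The candidate moves, the made-up scores, and the predict_moves() helper below are hypothetical placeholders, not any real chess engine’s API.

```python
from typing import Callable, Dict, List, Tuple

def predict_moves(
    legal_moves: List[str],
    evaluate: Callable[[str], float],
    top_n: int = 5,
) -> List[Tuple[str, float]]:
    """Rank the candidate moves by a rough evaluation and return the top_n
    moves a rational, win-seeking player would be expected to choose from."""
    ranked = sorted(legal_moves, key=evaluate, reverse=True)
    return [(move, evaluate(move)) for move in ranked[:top_n]]

if __name__ == "__main__":
    # Made-up scores standing in for our informal judgment of each move.
    rough_scores: Dict[str, float] = {
        "Qxf7#": 100.0,  # a mate-in-one should top any ranking
        "Nf3": 1.2,
        "e4": 1.0,
        "h3": 0.1,
        "Na3": -0.5,
    }
    for move, score in predict_moves(list(rough_scores), lambda m: rough_scores[m]):
        print(f"{move}: {score:+.1f}")
```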
