Clinical ethics and materialism

In a variant of the hoary old ‘ungrounded morality’ question, Barry Arrington has a post up at Uncommon Descent which ponders how a ‘materialist’ could in all conscience take a position as clinical ethicist, if he does not believe that there is an ultimate ‘right’ or ‘wrong’ answer. I think this betrays a fundamental misunderstanding of clinical ethics. In contrast to daily usage, ethics here is not a synonym for morality.

I can understand how a theist who believes in the objective reality of ethical norms could apply for such a position in good faith. By definition he believes certain actions are really wrong and other actions are really right, and therefore he often has something meaningful to say.

My question is how could a materialist apply for such a position in good faith? After all, for the materialist there is really no satisfactory answer to Arthur Leff’s “grand sez who” question that we have discussed on these pages before. See here for Philip Johnson’s informative take on the issue.

After all, when pushed to the wall to ground his ethical opinions in anything other than his personal opinion, the materialist ethicist has nothing to say. Why should I pay someone $68,584 to say there is no real ultimate ethical difference between one moral response and another because they must both lead ultimately to the same place – nothingness?

I am not being facetious here. I really do want to know why someone would pay someone to give them the “right answer” when that person asserts that the word “right” is ultimately meaningless.

(The last question is an odd one. You would pay someone to give you the “right answer” so long as they believe that there is such a thing?)

Of course you don’t have to go far into medical ethics before you get to genuine ethical thickets. The interests of a mother versus those of the foetus she carries; the unfortunate fact that there aren’t the resources to give every treatment to everyone; the thorny issues of voluntary euthanasia or ‘do not resuscitate’ decisions; issues raised by fertility treatments; cases such as the recent removal from hospital of Ashya King; the role of a patient’s own beliefs. There aren’t many right answers, when you get beyond the obvious things that you don’t need to pay someone to set guidelines for.

It is a bizarre argument to regard moral relativism as a bar to this job. A moral absolutist may believe that blood transfusion is wrong, that faith in the Lord is the way to get better, that embryos should never be formed outside a uterus, or some other such faith-based notion. They would then have to persuade others, of different faith or none, that this decision is indeed what objective morality dictates; and whatever their own views on morality, those others are under no obligation to accept it. So I don’t agree that the ‘grounding’ of an atheist’s personal moral principles has any bearing on their candidacy.

441 thoughts on “Clinical ethics and materialism”

  1. I still have no idea what he means, actually. His views on these matters are superior to the Taliban’s but somehow that doesn’t just mean that he prefers them. Something is immoral just in case it “violates his moral axioms” but it’s not personal relativism.

    No idea.

  2. walto,

{Deleted, because I’d only noticed your second response before I posted that you wouldn’t answer my question}

    Please don’t delete your comments. As Alan points out, comments are sacrosanct here at TSZ.

    It’s fine to add an ETA at the beginning of your comment indicating that it no longer applies, but don’t delete it.

  3. walto,

    If you don’t understand my position, don’t just keep repeating that you have “no idea” what it is. Engage my arguments.

    Here’s one I’d like you to address:

    The causal chain is the same. Both of us disapprove of certain actions because we feel they are immoral. It’s just that you take the additional step of claiming that your feelings are a (mostly) accurate reflection of objective morality, while I think that notion is completely unjustified.

    To be absolutely explicit, the ‘algorithm’ is:

    0. Assume that your feelings are a mostly accurate indicator of objective morality.
    1. Ask yourself, “do I feel that X is immoral?”
    2. If the answer is “yes”, then disapprove of X.

    I reject step 0 while you don’t, but otherwise the process is the same.

    Do you agree? If not, where precisely does my argument go wrong?
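    The three numbered steps above can be sketched as a tiny Python function. This is purely illustrative: `feels_immoral` and `assume_objective` are hypothetical stand-ins for a person’s moral feelings and for the step-0 assumption, not anything from the original discussion.

```python
# Illustrative sketch of the three-step 'algorithm' quoted above.
# 'feels_immoral' is a hypothetical predicate standing in for a
# person's moral feelings about an action x.

def moral_verdict(x, feels_immoral, assume_objective=False):
    # Step 0 (optional): the objectivist additionally assumes that
    # feelings track objective morality; the subjectivist skips this.
    grounding = "objective" if assume_objective else "subjective"
    # Step 1: ask "do I feel that x is immoral?"
    if feels_immoral(x):
        # Step 2: a "yes" answer yields disapproval.
        return ("disapprove", grounding)
    return ("no disapproval", grounding)

feels = lambda x: x == "stoning"
print(moral_verdict("stoning", feels))                         # ('disapprove', 'subjective')
print(moral_verdict("stoning", feels, assume_objective=True))  # ('disapprove', 'objective')
```

    Note that on this sketch the two positions differ only in the `grounding` label attached to the verdict; the causal path from feeling to disapproval is identical, which is exactly the point being argued.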

  4. I wouldn’t use “assume” and I’d probably add some qualifications about whether I find other possible actions more immoral, etc., but basically that’s my view, yes.

Now let’s look at your position. As you reject step 0 but get 2 from 1, the question you can’t seem to answer is whether, for you, 2 actually means anything other than 1, and, if so, how it could. Neither Bruce nor I can figure this out.

  5. keiths:
    walto,

Please don’t delete your comments. As Alan points out, comments are sacrosanct here at TSZ.

    It’s fine to add an ETA at the beginning of your comment indicating that it no longer applies, but don’t delete it.

    You know, you’re very bossy and quick to moderate–especially for someone who claims to dislike active moderation. I hope you’re not married.

  6. One problem I’ve had in discussing morality with keiths is that he and I use “objective” and “subjective” in pretty different ways. He seems to use “objective” the same way that I use “absolute”, maybe? I can’t quite put my finger on it. Or perhaps he thinks that only facts can be objective, and values can’t be? I don’t know.

A closely related problem to this often comes up in my attempts to teach introduction to ethics to my students. All but the most dyed-in-the-wool social constructivists will accept that there are ways of verifying objective truths, or claims about objective reality. (They insist on using “prove” when talking about scientific theories, but that’s a mere quibble.) They do this, I think, because they have at least some vague intuitions about how this might be done — say, by doing an experiment or conducting a rigorous observation in the field.

    But they have no intuitions at all as to how any claim about objective values could be verified, so the very idea seems baffling to them — as baffling as Platonic realism about abstracta or Berkeleyian anti-realism about physical objects. And when I realized this, it occurred to me that I also don’t have any intuitions about how claims about objective values could possibly be verified!

    The very best I can do is appeal to something like Nussbaum and Sen’s “capabilities approach,” and say that the facts about the conditions of human psychological and social development make moral judgments true or false to the extent that either

    (a) those judgments are part of a larger pattern of discursive practices and bodily habits that, taken together, tend to promote or hinder human flourishing;

    or
    (b) applying that discursive practice to the concrete situation, in the act of making a judgment about that situation, will promote or hinder the flourishing of the particular person to whom the judgment is being applied.

    and that’s a seemingly weird version of “objectivity” for values to have.

  7. walto,

    If you think admonishment from another commenter is “moderation”, then I disagree. I would never ask that your comments be placed in moderation, Guanoed, deleted, edited, or moderated in any other way.

    This is an etiquette issue. The moment after you click ‘Post Comment’, someone may refresh his or her browser, see your comment, and start responding to it. When you delete your comment, you leave their response dangling. It’s a disservice to both the respondent and to the other readers, who wonder what is being responded to.

  8. keiths:
For example, it is possible to consistently hold both that punishment is immoral and that other acts are immoral, too. If punishment is immoral then no one deserves to be punished, yet that doesn’t imply that murder becomes moral.

    Keith:
    That bothered me when I read it but I could not think of how to express my discomfort until now.

    I think punishment is just another word for negative consequences. I don’t think punishment can be said to be immoral without providing more details on the nature of the punishment and the situation in which one proposes to apply such a punishment.

    Let me put it this way.

    1. Agents can have moral responsibility (though not “absolute” = “ultimate” responsibility).

    2. If an agent violates a norm in a situation where that limited moral responsibility applies to the agent’s actions, then

3. the agent deserves appropriate punishment. I am using the weasel word “appropriate” as shorthand for “the negative consequences that should apply in this situation for this agent”.

    As best I can tell, you accept 1. But you do not accept that 3 then follows from 2. Possibly you would accept 3 with a different verb from “deserves”? Or if not, how could punishment be involved in the situation covered by 1 and 2?

  9. KN,

    And when I realized this, it occurred to me that I also don’t have any intuitions about how claims about objective values could possibly be verified!

    You might be on the verge of becoming a moral subjectivist. 🙂

    ETA: I’ll respond to the rest of your comment later, but I’m in the middle of a reply to walto right now.

  10. Kantian Naturalist:

They do this, I think, because they have at least some vague intuitions about how this might be done — say, by doing an experiment or conducting a rigorous observation in the field.

    As I am sure you know, there is much more than that to making scientific theories objective. A set of norms must be applied to the theories: peer review, generation of novel predictions, evaluation against competing explanations, consistency with other science, proper control and analysis of experiments, and so on.

    Further, if someone claims that these norms are just things science uses to justify itself and do not make it objective, then one can add that the norms themselves are what permit successful replication and even more importantly, the technology we all use.

Scientific theories are objective because of the public and pragmatic processes by which we confirm that the theories conform to those norms.

    (a) those judgments are part of a larger pattern of discursive practices and bodily habits that, taken together, tend to promote or hinder human flourishing;

I think that one can flesh out what the nature of that “pattern of discursive practices” is in a way analogous to the scientific process and then make the further claim that objectivity follows in the same way that science is objective.

    I give more detail on that in previous posts, so I won’t repeat. Most of it comes from my understanding of Kitcher.

    But, as I did mention, I don’t think one can use “human flourishing” directly, as both terms really are subject to further definition depending on the norms of the moral framework (eg the status of fetuses and people in a permanent vegetative state).

  11. walto,

Now let’s look at your position. As you reject step 0, but get 2 from 1, the question you can’t seem to answer is whether, for you, 2 actually means anything other than 1, and, if so, how it could.

    The “question [I] can’t seem to answer” is one I keep answering!

    I’ll answer it again, but please acknowledge and ponder my response this time.

    Here are the steps again:

    To be absolutely explicit, the ‘algorithm’ is:

    0. Assume that your feelings are a mostly accurate indicator of objective morality.
    1. Ask yourself, “do I feel that X is immoral?”
    2. If the answer is “yes”, then disapprove of X.

    I reject step 0 while you don’t, but otherwise the process is the same.

    Yes, step 2 means something other than step 1, for both you and me. Step 2 follows from step 1, but it is not identical to step 1. Otherwise I would have left it out!

    Note that just as I can’t get to step 2 without step 1, neither can you. Both of us depend on our feelings to decide what is immoral. You just make the additional assumption described in step 0 — that your feelings magically tap into objective morality — which seems wholly unjustified to me.

    Also note that while step 2 is noncognitive, step 1 isn’t always. I wrote earlier:

    Again, I think morality is subjective, so when I say that X is morally wrong, I mean that it violates my (subjective) moral axioms, or that it violates an implication of those axioms. I choose my axioms because they feel right. There’s no other justification. That’s why they’re subjective!

    To decide whether X is immoral, I first need to figure out whether it violates my moral axioms or their implications. Once I have determined that it is (subjectively) immoral, then I morally disapprove of it.

    Immorality is prior to disapproval. I can’t answer the “do I disapprove?” question without first answering the “is it (subjectively) immoral?” question.

    [Emphasis added]

    To determine whether an action X conflicts with the implications of one’s moral axioms can be a complicated cognitive task. Deciding to disapprove of X after you have determined that it is immoral is not. The two steps are not the same.

  12. BruceS: As I am sure you know, there is much more than that to making scientific theories objective. A set of norms must be applied to the theories: peer review, generation of novel predictions, evaluation against competing explanations, consistency with other science, proper control and analysis of experiments, and so on.

    Further, if someone claims that these norms are just things science uses to justify itself and do not make it objective, then one can add that the norms themselves are what permit successful replication and even more importantly, the technology we all use.

    Scientific theories are objective because of the public and pragmatic approaches to confirm the norms the theories conform to.

Oh, of course — I agree with all that! I was only raising the suggestion that my students have at least a vague intuition of what verification of objective claims amounts to with regard to facts, but no corresponding intuition of what verification of objective claims amounts to with regard to values, and that’s part of why relativism comes naturally to them.

  13. Bruce, to KN:

    But, as I did mention, I don’t think one can use “human flourishing” directly, as both terms really are subject to further definition depending on the norms of the moral framework (eg the status of fetuses and people in a permanent vegetative state).

    That’s right, you have to drill down further. Hitler presumably would have defined “human flourishing” in terms of Aryan dominance, for example, while we would reject that.

    My contention is that you can’t drill down forever. You have to reach a stopping point, and the stopping point will be at your moral axioms. But moral axioms are starting assumptions, just as geometric axioms are. They are assumed, not justified.

    If they can’t be justified, how do we know that they are objectively true? I say that we can’t know that, and that we must therefore regard them as subjective. Walto says that our consciences tell us that they are objective, and that we should trust our consciences on this matter.

    I don’t see why, since we have no reliable way of determining or demonstrating that our consciences are reliable indicators of objective morality.

  14. keiths,

I’m sorry, but that’s a long post that’s not helpful at all. The entailments issue just adds a pointless complication. Just take one of your moral axioms. How can your approval of it be prior to its goodness if its goodness consists in it being among your axioms? What does ‘X is good’ mean to you if not ‘I feel moral approval towards X’? Or, as Bruce has asked twice, just what is your metaethical position (other than not liking objectivism or cultural relativism)?

  15. KN,

    One problem I’ve had in discussing morality with keiths is that he and I use “objective” and “subjective” in pretty different ways.

    I think our main disagreement has been over your idea that moral systems can be objectively judged as better or worse based on how well they promote “human flourishing”.

    Here’s a comment of yours from this thread:

    In a roughly similar fashion, an ethical system is held to be objectively good (or bad) to the extent that it is more conducive to the cultivation and flourishing of human capacities than other systems.

    And here’s one from a year ago that says essentially the same thing:

    I would still want to say here that there are ‘matters of fact’ that make some moral judgments better than others — namely, whether the moral judgment belongs to a family of moral judgments and moral practices that tend to promote human flourishing.

    There have been others as well.

    My objection is that even if “human flourishing” is well-defined, the choice of “the promotion of human flourishing is the defining goal of morality” as your criterion remains subjective. If the choice of criterion is subjective, then your evaluation of a moral system against that criterion will yield an ultimately subjective result.

    Now, it’s important to point out that once that axiom has been selected and a suitable metric defined, the rest of the evaluation can proceed more or less objectively.

    In other words, one moral system may be said to be objectively better at promoting human flourishing than another, given the right definition and metric, but it can’t be said to be objectively better, full stop. Since the choice of criterion is ultimately subjective, so is the final judgment rendered by that criterion.

    You could just as easily replace your axiom with another: “The promotion of penguin flourishing is the defining goal of morality”. In that case a different ranking of moral systems will result.

    If the two criteria yield different rankings among the competing moral systems, then it is impossible to say that one of the systems is objectively the best, full stop. The comparison is relative to a subjectively chosen axiom, which means that the final result is subjective.

  16. walto,

    I’m sorry, but that’s a long post that’s not helpful at all. The entailments issue just adds a pointless complication. Just take one of your moral axioms. How can your approval of it be prior to its goodness if its goodness consists in it being among your axioms?

    First, we’ve been talking about how we decide to approve or disapprove of actions, not axioms. Second, I haven’t claimed that approval comes prior to an assessment of an action’s “goodness”. I’ve been stating the exact opposite! For example:

    To decide whether X is immoral, I first need to figure out whether it violates my moral axioms or their implications. Once I have determined that it is (subjectively) immoral, then I morally disapprove of it.

    There’s a reason step 2 comes after step 1!

    Speaking of step 2 and step 1, you’ve been challenging my claim that they are distinct. I presented an argument showing that they are in fact distinct, because step 2 is noncognitive while step 1 can have a cognitive component (and often a substantial one!).

    Do you concede that the steps are distinct? If not, please identify exactly where my argument goes wrong and why you think so.

  17. walto,

    In my comment, click on the words “an argument”. They’re in blue, which indicates that they are a hypertext link. When you click there, your browser will take you to the comment that contains my argument.

    You can’t miss it. It’s the part where I explain that Step 2 cannot be the same as step 1, because step 2 is noncognitive while step 1 can contain a cognitive component. Therefore, they can’t be identical.

  18. I clicked. I still have no idea what argument you’re referring to. What are you arguing for? What are the premises?

    And what do you think “X is good” means?

  19. Oh, and what’s “non-cognitive” about whatever “step” you are saying is non-cognitive?

    What, exactly, are you trying to say?

  20. I clicked. I still have no idea what argument you’re referring to. What are you arguing for? What are the premises?

    Poor walto. No rebuttal, eh?

    I’m content to leave it there. Think about it, and if at some point you come up with a rebuttal, come back and share it with us.

    ETA: Added quote of walto’s comment before my response.

Take a concrete example, say, being nice to the nice. Let’s say you think being nice to the nice is a good way to be. Does that simply mean you have certain feelings about being nice to the nice, or, if not, what DOES it mean?

  22. Also, what do you mean by “cognitive” and “non-cognitive”? You are saying, I take it, that some proposition or other is “cognitive” and another one is not. But I don’t know what you mean. What is a non-cognitive proposition?

  23. If you think those questions are too hard or too confusing or unclear or stupid or you simply don’t feel like answering them, or you think you already have, or whatever, I have a suggestion. Just look at the link Bruce supplied and tell us what your position actually is. Or, if you think it’s importantly different from any of the metaethical positions on that site, just indicate how.

    All we’re trying to do is figure out what your position is, what you think “X is wrong” means. I’m not sure why that bothers you so much.

  24. walto:
If you think those questions are too hard or too confusing or unclear or stupid or you simply don’t feel like answering them, or you think you already have, or whatever, I have a suggestion. Just look at the link Bruce supplied and tell us what your position actually is. Or, if you think it’s importantly different from any of the metaethical positions on that site, just indicate how.

    All we’re trying to do is figure out what your position is, what you think “X is wrong” means. I’m not sure why that bothers you so much.

ETA: Or you could just say you don’t know or you’re not sure or whatever. That would be OK too. None of this requires any “arguments” regarding “cognitive” or “non-cognitive” “steps.” It’s very simple. You could say, e.g., “I think my view is closest to blah blah, although I don’t quite agree with it about blah blah blah.” Or you could say, “I have to think more about it–I’m not sure where my position falls exactly, because blah blah.” Any of that would probably go farther in answering this simple question than linking something you’ve already posted because, obviously, I, at least, have not been able to understand how anything you’ve said so far has been responsive.

  25. Bruce,

Another way to ask the question: what sort of arguments would you use to convince someone that “stoning was wrong”. It seems to me that all you could do is appeal to a series of arguments of the form “Stoning is wrong because it involves x and Keith thinks/feels x is wrong”.

    Not at all. There are lots of possibilities. You could demonstrate that stoning conflicts with his moral axioms (or with an implication of his moral axioms). You could show that his moral axioms are inconsistent, in hopes that the revised axioms will forbid stoning. You could argue that forgiveness is a virtue. You could point out that the accused might turn out to be innocent, in which case stoning would be a grave injustice. You could try to stimulate his sense of empathy in various ways, so that he would see the horror of stoning and change his moral axioms. You could even try something as drastic as pretending that you are about to stone him to death. If a person is buried up to his neck, expecting to die, watching as angry-looking people pick up large stones, waiting for the first one to bash his skull, he might suddenly discover an ability to empathize with stoning victims.

    Now, what if you were in the same situation, trying to persuade the man that stoning was immoral? What valid argument could you make that isn’t available to me as a moral subjectivist?

  26. KN,

    A closely related problem to this often comes up in my attempts to teach introduction to ethics to my students. All but the most dyed-in-the-wool social constructivists will accept that there are ways of verifying objective truths, or claims about objective reality. (They insist on using “prove” when talking about scientific theories, but that’s a mere quibble.) They do this, I think, because they have at least some vague intuitions about how this might done — say, by doing an experiment or conducting a rigorous observation in the field.

    But they have no intuitions at all as to how any claim about objective values could be verified, so the very idea seems baffling to them — as baffling as Platonic realism about abstracta or Berkeleyian anti-realism about physical objects. And when I realized this, it occurred to me that I also don’t have any intuitions about how claims about objective values could possibly be verified!

    Yes, and that’s precisely why I am a moral subjectivist. No one has ever presented me with good reasons for accepting claims about objective morality, and I can’t think of any myself.

    We can’t trust our consciences because we know they’re fallible and we have no independent way of gauging their reliability. We can’t use other humans as references because there might be systematic defects in our consciences that affect all normal humans, just as some visual illusions do.

    The fact that moral intuitions can be strong or emotionally salient certainly doesn’t guarantee their reliability, and we have no evolutionary reasons for expecting our consciences to be “tuned” to objective morality (if such a thing even exists), because there is no selective pressure for such “tuning”.

    Some folks argue that objective values are somehow different from facts, and that they therefore don’t need to be verified in the same way that a fact would. However, I’ve never heard a good explanation of how such a non-factual status would make our intuitions about values objectively reliable.

    The bottom line is that there’s no good evidence that objective values exist, and even if they did, there’s no reason to think that our consciences would be “tuned” to them. The kind of morality that actually matters in the world is subjective morality.

  27. keiths:

Not at all. There are lots of possibilities
[…]

What valid argument could you make that isn’t available to me as a moral subjectivist?

    Keith:
I am not sure how your metaethical position fits into philosophers’ categories, which I would need to understand before commenting on how my sources of arguments compare with yours.

    A lot of philosophical metaethics revolves around whether moral exclamations are statements with truth content or are something else (eg, imperatives, like “Don’t stone people as punishment!”). Since your argument approach involves convincing people of true/false (or contradictory) properties of their statements, it seems you accept that moral statements (like “stoning is a wrong form of punishment”) can have truth values. So where do these truth values come from?

You mention axioms a lot. So maybe that means moral statements are only true/false/contradictory with respect to a personal set of axioms. That would be personal relativism if we leave it there. But then we could ask, do these axioms themselves have truth values beyond what a person wants to believe? If so, are there ways of supporting certain axioms above others? Or is it up to anyone to accept whatever (consistent) set they choose?

You also mention empathy and appealing to someone’s ability to put himself or herself in the accused person’s place. But these feelings vary from person to person, and so simply appealing to any one person’s feelings would lead to relativism, I think. Unless you are saying most people share empathetic attitudes for biological/evolutionary reasons and it is those commonly shared attitudes that determine moral correctness. If you believe that, then I would say you are a naturalist when it comes to determining sources of moral truth.

It also depends on what happens if you fail to convince the person of your point of view. If you say “he or she is wrong according to my moral standards but not according to his or hers,” you are a relativist. But if you believe he or she is still wrong without qualification, then you are not a relativist. But if you take the unqualified approach, how do you know you are right and the other is wrong?

    On your question to me: I don’t know whether my types of arguments differ from yours because I don’t understand your source of arguments yet.

However, as I’ve mentioned before, I don’t restrict myself to purely deductive arguments; I think that ampliative arguments are also allowable, since they are in science as well. So I would want to make sure that by “valid” you did not mean you were expecting deductive arguments.

  28. keiths:
    If they can’t be justified, how do we know that they are objectively true?

    Keith:
    Do you think science is objectively true? If so, why?

I’ve tried to be careful and say I see properly derived moral frameworks as being “as objective as science”, never simply “objective”. Of course, I may have been careless in some of my posts and not included the whole phrase. But that was just carelessness.

    See my other comment regarding your use of axioms and whether or not that implies metaethical relativism.

  29. keiths:

    If the two criteria yield different rankings among the competing moral systems, then it is impossible to say that one of the systems is objectively the best,

    Keith:
    I agree that pluralism among ethical frameworks exists. But that does not mean all frameworks are equal. It just means there is only a partial ordering.

Consider science: right now there are three approaches in neuroscience and psychology to mental content and concepts: semantic/linguistic (GOFAI), connectionist, and perceptual. All have supporters and active research programs which conform to scientific norms. But there are inconsistencies between their theoretical frameworks.

    Does that mean they are not objective? No, because the objectivity comes from the process that all are following.

    Does that mean that all theories are equal? No, because others like dualism and behaviorism have been rejected by the scientific norms and process. Hence all of the first three are better than those two.

    I claim similar concepts can apply to moral frameworks.

Kantian Naturalist: my students have at least a vague intuition of what verification of objective claims amounts to with regard to facts, but no corresponding intuition of what verification of objective claims amounts to with regard to values

    It was your doubts about objectivity that concerned me, KN! If you are interested in more on what I think on that, see my recent posts to Keith.

    Off topic: I notice from a recent posting by Dr. Shallit that your man Dewey got a reference in an unexpected place.

    Dr. Shallit was pointing out to an ID proponent (once again) that the usual, mathematical definition of information had nothing to do with meaning. But then he said there was some recent work in this paper (pdf) which attempts to provide a mathematical approach to meaning.

    The paper tries to operationalize meaning by referring to the goals of the agents who are trying to communicate. And for precedents to this approach, the authors mention two philosophers, Dewey and Wittgenstein.

    Now, being IT people, their idea of a goal with meaning is rather mundane: getting a printer server to print something for a client PC. We’ll still leave swamp people and twin earths to the philosophers.

  31. I agree, Bruce, that the same questions have just been shifted over to keiths’ axioms. That’s why the talk about consistency is unhelpful. I also note the remark to the effect that we could try to convince someone that ‘forgiveness is a virtue’. That is either axiomatic or it isn’t. If it is, we want to know its basis, and if it isn’t, we want to know the bases for the axioms on which it depends.

    While lying in bed last night I think I sussed out the point of keiths’ remarks about something being cognitive and something else being non-cognitive. I think it was an attempt to show a defect in emotivism, the claim that the moral statement just is the feeling of disapprobation. It’s akin, I think, to a critic of that view saying, ‘Wait a minute, a toddler might have the feelings but not have sufficient conceptual equipment to understand the “it’s wrong” claim!’ I take it the emotivist might agree that there is a developmental requirement for making/understanding moral statements or something like that (if that is indeed keiths’ argument–with the stuff about axioms stripped out). In any case, maybe the whole post supplies some evidence that keiths is more comfortable with subjectivism than with emotivism.

  32. I note, too, that keiths’ last couple of posts reiterate the (what seems to me clearly false) claim that factual statements are somehow different from value statements in being dispositively confirmable. As I’ve said repeatedly, there’s no obvious difference between the realms on THAT front. The list of things one can do for confirmation may go on longer with facts, because we can use science to help in that arena and not the other, but ultimately both types of claims may seem true based on all evidence available to us at any given time and still not be.

    I agree with his claims that justifications must stop SOMEWHERE, but that’s true of factual knowledge too. Coherence only goes so far in respect of both worlds. But keiths has a verificationist picture of knowledge–if science is involved there is the possibility of some kind of REAL confirmation present, otherwise not. And he concludes from this that knowledge is available in the area of “objective facts” but not in the area of “objective values.” I’m not sure that picture is actually consistent with agnosticism with respect to objective values, because on the subjective view, values would simply seem to BE subjective values (what else COULD they be?), and, I take it, those can be known with at least as much certainty as any factual claims about the physical world–no matter how much scientific confirmation the latter have.

    But, of course, we’ve been over this stuff countless times, so I’ll stop here.

  33. walto:
    I note, too, that keiths’ last couple of posts

    I am sure Keith understands his position as being consistent and I’d like to understand how he does so.

    The challenge is that I don’t think we are using words the same way, but I don’t think I know all of the details on how we differ. For example, I am still not exactly sure how he uses “deserves”. I also notice the terms “cognitive” and “non-cognitive” have come up in your exchange, but I don’t think these are being used in the meta-ethical sense in all cases, but maybe they are in some.

    OT: If you are in the mood for teaching, what exactly does “dispositively confirmable” mean to a philosopher? I see “disposition” in the SEP and understand the basics as explained there, but I am not sure what it means when paired with “confirmable”.

    ETA: My best guess would be something like “we can observe that behavior by providing the appropriate stimulus and looking for the appropriate response” where, as per my usual habit, I am hiding complications in “appropriate”.

  34. BruceS: Keith:
    Do you think science is objectively true? If so, why?

    Obviously, I am not keiths. But that’s an interesting question.

    It depends on what you mean by “science”. Many scientific statements are objectively true. But I do not say that about scientific theories. I see the best theories as neither true nor false. Roughly speaking, I could apply Al Gore’s famous dictum “there is no controlling legal authority.” As I see it, there are criteria for a scientific theory, but they are pragmatic criteria rather than truth criteria.

    I think this does not work on moral issues.

    Compare with language. I do not say that the English language is objectively true. And I do not say that the German language is objectively true. I see those languages as neither true nor false. And I see scientific theories in a similar light.

    We cannot sensibly ask “does the German language correspond to reality”. The language plays a different role. In some sense, the German language is the system of correspondences that we use to decide whether a particular statement in German corresponds to reality. We can apply a correspondence test to individual statements, but not to the language as a whole.

    Given a statement in German, we can often say almost the same thing with an English statement. We take the German statement, see what it asserts about objective reality, then find an English way of saying the same thing.

    Given a geocentric statement, we can similarly find a heliocentric statement which says roughly the same thing about objective reality. Heliocentrism and Geocentrism are alternative systems for establishing correspondences between linguistic statements and reality. In that way, they are analogous to English and German.

    But I don’t think that applies with moral frameworks. We do not have the same translatability between different moral frameworks. We cannot come to framework-independent agreement. And that’s because there isn’t a moral entity analogous to objective reality that we can use to establish translation.

  35. BruceS: I am sure Keith understands his position as being consistent and I’d like to understand how he does so.

    OT: If you are in the mood for teaching, what exactly does “dispositively confirmable” mean to a philosopher? I see “disposition” in the SEP and understand the basics as explained there, but I am not sure what it means when paired with “confirmable”.

    ETA: My best guess would be something like “we can observe that behavior by providing the appropriate stimulus and looking for the appropriate response” where, as per my usual habit, I am hiding complications in “appropriate”.

    All I mean by “dispositively” is what lawyers mean by it–absolutely disposing of the issue. So, something is dispositively confirmed when its confirmation is final, irreproachable, proven.

  36. Neil Rickert: Compare with language. I do not say that the English language is objectively true. And I do not say that the German language is objectively true. I see those languages as neither true nor false. And I see scientific theories in a similar light. …

    Neil,

    Do you see no important disanalogies between scientific theories and languages? They seem very different to me.

  37. walto: Do you see no important disanalogies between scientific theories and languages? They seem very different to me.

    Of course they are different in many ways. And, to make matters worse, we are not very consistent about how we use “theory.” Still, with respect to questions of truth of theories, I do see them as analogous.

  38. BruceS: 1. Agents can have moral responsibility (though not “absolute” = “ultimate” responsibility).

    2. If an agent violates a norm in a situation where that limited moral responsibility applies to the agent’s actions, then

    3. the agent deserves appropriate punishment. I am using the weasel word “appropriate” as a shortform for “the negative consequences that should apply in this situation for this agent”.

    Just wanted to comment on this bit. I generally agree with it, but I’m not on board with the whole “ultimate responsibility” biz. I’m not sure what that would mean. I take our responsibility for our actions to fall roughly where the courts say it does. That’s what “responsibility” (and “moral responsibility”) means, I think. I don’t think “ultimate (or “absolute”) responsibility” means anything. If I wanted to do X and did do X and there is no weirdness about the various descriptions of X to confuse the case or any kind of Gettier biz going on, then I’m responsible. There may be subtleties about having to know the moral implications, etc. for me to be blameworthy. Lots and lots of trials about all that kind of stuff.

    I mean, are there supposed to have been, e.g., no other causal contributors to my act but my soul, or something when we’re talking about “ultimate responsibility”? What if I had to step over a chair that I didn’t put there? To me, the whole notion is suspect and should be dumped as a relic of some kind of Cartesian theory of mind and action. It has nothing to do with punishment–just, unjust, pointless, sensible, or otherwise.

    ETA: Interestingly, I think that pushing a notion of “ultimate responsibility” on the grounds that it is required for punishment to be “deserved” might put one in the libertarian camp with guys (indeed, some here would call them doofi) like the dreaded God-fearing Plantinga and his van Inwageny minions. I know YOU don’t require ultimate responsibility for punishment to be deserved, but…….

  39. Neil Rickert:

    It depends on what you mean by “science”. Many scientific statements are objectively true. But I do not say that about scientific theories. I see the best theories as neither true nor false. Roughly speaking, I could apply Al Gore’s famous dictum “there is no controlling legal authority.” As I see it, there are criteria for a scientific theory, but they are pragmatic criteria rather than truth criteria.

    I agree with most of what you say. In fact, I have tried to avoid the word “true” altogether: the phrase “objectively true” is a quote from Keith, I think.

    You are right to say truth is not a part of science. I think it is part of philosophy of science (when talking about truth in science).

    I also agree we should judge science pragmatically. Kitcher suggests a similar approach for moral frameworks.

    I think we can leave out truth to start and just compare the objectivity of the processes of science and of building moral frameworks with respect to pragmatic evaluation. Then one could conclude that if a process of building moral frameworks was sufficiently analogous to the process of science (not to say it would be science, though), then it also deserves to be called objective. There’s lots more above; I won’t bother to repeat it.

    I am not sure I quite follow the language comparison. I suspect it is mostly about truth, which I would put aside for the initial work. Kitcher raises a way to speak about a pragmatic concept of truth, but I’ll leave that for later.

  40. walto: Just wanted to comment on this bit. I generally agree with it, but I’m not on board with the whole “ultimate responsibility” biz. …

    The “ultimate” stuff comes from Dennett, and he brings it up in replying to Strawson’s arguments against compatibilism allowing for moral responsibility. Dennett agrees he has not shown ultimate responsibility, but argues that compatibilists don’t need that to claim that we are morally responsible in some cases.

    I can get more details from the Dennett stuff I have if that is of interest to you.

    By the way, in reviewing the post you quote, I notice I was not careful to distinguish metaethical responsibility (“actually deserves” if I understand Keith) from descriptive responsibility. I’ll see if that gets me into trouble.

  41. Neil Rickert: We cannot come to framework-independent agreement.And that’s because there isn’t moral entity analogous to objective reality that we can use to establish translation.

    I missed this on initial read.

    One thing I am looking for from an objective process is being able to say some frameworks are better than others. I want to avoid complete relativism, but I accept pluralism. So I am not aiming for “complete agreement”.

    ETA: Again, I think the translation part relates more to establishing truth. A lot of philosophers of science think science tells us nothing about unobservable reality (you too, I think). And the ones that do think it tells us about reality disagree on what aspect of reality it tells us about, e.g. relations versus relata. But almost all agree science is objective, regardless.

    It is the process I am calling objective; the framework would only gain that adjective if it was derived by following the process.

    See this post for a science theory comparison (I left out Dynamic System Theory in that post, I realized later).

  42. BruceS: The “utlimate” stuff comes from Dennett, and he brings it up in replying to Strawson’s arguments against compatibilism allowing for moral responsibility. Dennett agrees he has not shown ultimate responsibility, but argues that compatibilists don’t need that to claim that we are morally responsible in some cases.

    Which Strawson? That doesn’t sound like Peter. Was he replying to Peter’s son Galen (who, IMHO, is wrong about nearly everything)?

  43. Bruce,

    Here’s my point regarding the stoning advocate. You wrote:

    Another way to ask the question: what sort of arguments would you use to convince someone that “stoning was wrong”. It seems to me that all you could do is appeal to a series of arguments of the form “Stoning is wrong because it involves x and Keith thinks/feels x is wrong”.

    I don’t think I’m limited to that sort of argument, as I explained earlier.

    But if I were limited to that type of argument, then Ollie the Objectivist would be limited in the same way. He could only say “stoning is wrong because it involves x and Ollie thinks/feels that x is objectively wrong” — unless he could separately justify the claim that x is objectively wrong.

    In either case Stan the Stoner could say “I disagree with your thoughts/feelings”, and proceed to stone someone to death.

    It’s subjective morality in both cases. I think/feel that stoning is wrong, and so does Ollie. From each of our subjective viewpoints, stoning is immoral. It’s just that Ollie thinks his subjective morality also reflects an objective truth: that stoning is objectively immoral.

    Unless Ollie can demonstrate this, his argument carries no more weight with Stan than mine does.

    ETA: This is why I said earlier that subjective morality is the only kind of morality that actually matters in the world.

  44. Bruce,

    Since your argument approach involves convincing people of true/false (or contradictory) properties of their statements, it seems you accept that moral statements (like “stoning is a wrong form of punishment”) can have truth values.

    No, because I don’t think that truth is relative. To me, truth is objective truth, and since we have no way of justifying claims about objective morality, we cannot legitimately assert that stoning is objectively immoral.

    My argument approach isn’t designed to convince people that the statement “stoning is wrong” is true — I have no way of demonstrating that. I’m simply trying to persuade them to regard it (subjectively) as wrong, so that they will no longer support it.

    But then we could ask, do these axioms themselves have truth/false values beyond what a person wants to believe?

    If objective morality actually exists, then our axioms may be true or false. I just don’t see any evidence that objective morality exists, and even if it did, I don’t see how we could access it in order to decide whether our axioms were true or false.

    Unless you are saying most people share empathetic attitudes for biological/evolutionary reasons and it is those commonly shared attitudes that determine moral correctness.

    I think our shared capacity for empathy is important, because that is our best hope for finding a consensus moral system that most humans would be willing to live under, but I don’t think the fact that empathy is a widely shared capacity makes it objectively moral.

    Do you think science is objectively true?

    No. The word ‘science’ has multiple meanings, but none of them could be labeled as “objectively true”, in my opinion. Science as an institution can’t be objectively true or false, because institutions aren’t true or false. Science as a methodology can’t be objectively true, for analogous reasons. Science as a body of claims about the natural world can’t be objectively true, because it includes speculations and contradictory claims on its fringes. It does include objective truths, however.

    I’ve tried to be careful and say I see properly derived moral frameworks as being “as objective as science”, never simply “objective”.

    This might be the crux of the confusion. I regard science as (ideally) objective, but not objectively true. That is, the goal of science is to discover objective truths, and the best way to do that is to validate them in an objective manner, but that does not mean that science itself is “objectively true”.

    So science can be regarded as “objective” in a couple of senses: 1) when done well, it leads to the discovery of objective truths, and 2) it (ideally) validates those truths in an objective manner.

    Once you’ve subjectively selected your moral axioms, a moral system can proceed objectively to determine whether a given action is moral or immoral (sense 2), but the end result is not objectively true (sense 1).

    “Stoning is morally wrong” is a subjective statement that cannot be established as true or false.

  45. Bruce,

    Let me put it this way.

    1. Agents can have moral responsibility (though not “absolute” = “ultimate” responsibility).

    2. If an agent violates a norm in a situation where that limited moral responsibility applies to the agent’s actions, then

    3. the agent deserves appropriate punishment. I am using the weasel word “appropriate” as a shortform for “the negative consequences that should apply in this situation for this agent”.

    As best I can tell, you accept 1. But you do not accept that 3 then follows from 2.

    That’s right.

    Possibly you would accept 3 with a different verb from “deserves”? Or if not, how could punishment be involved in the situation covered by 1 and 2?

    I don’t think anyone deserves to be punished. Here’s why:

    I take “X deserves to be punished” to mean “the proportionate suffering of X is a good thing in itself, given that X committed an immoral act A.” In other words, X deserves to suffer, whether or not that suffering leads to desirable downstream consequences.

    To justify the desirability of X’s suffering would, to me, require that X be ultimately responsible for act A — at minimum — and even then I’m doubtful, because I regard suffering as intrinsically bad, and adding more suffering to the world doesn’t by itself accomplish anything desirable.

    Where punishment comes into play is in a purely consequentialist sense. I don’t think that even a serial killer deserves to suffer (though my intuition screams otherwise at times!). Any suffering a criminal experiences should be for the sake of downstream consequences, like protecting society or reforming the criminal himself.

  46. Neil,

    Given a statement in German, we can often say almost the same thing with an English statement. We take the German statement, see what it asserts about objective reality, then find an English way of saying the same thing.

    Given a geocentric statement, we can similarly find a heliocentric statement which says roughly the same thing about objective reality. Heliocentrism and Geocentrism are alternative systems for establishing correspondences between linguistic statements and reality. In that way, they are analogous to English and German.

    I disagree.

    The defining statement of heliocentrism is “the planets, including the earth, revolve around the sun”. There is no way to assert that idea in a geocentric system without converting it into a heliocentric one!

    Likewise, “the sun and the planets revolve around the earth” cannot be asserted in a heliocentric system without converting it into a geocentric one.
