Morality for dummies

Premise:

  • A “bad state” is a state that an organism would want to change.
  • A “good state” is a state that an organism seeks to achieve.

Therefore:

  • A “bad action” is causing an organism to enter a state that they would want to change.
  • A “good action” is helping an organism achieve a state that they don’t want to change.

Unfortunately, sometimes the good state of one organism depends on the bad state of another (or of the same organism at a different time). So for any organism (and we are probably the only ones on this planet at this time) with the capacity to weigh up actions on the basis of cui bono? (and when?), there will be frequent tension between competing claims.  I suggest that our methodology for resolving these claims is what constitutes what we call our “morality”, and that our methods of agreeing on this methodology are what constitute our justice systems.  I also suggest that both arise directly from our capacity to weigh up alternative courses of action on the basis of competing claims to the right to a “good state”, and need have nothing to do with whether or not there is a God or gods who care either.

Man of all creatures
Is superlative
(Away melancholy)
He of all creatures alone
Raiseth a stone
(Away melancholy)
Into the stone, the god
Pours what he knows of good
Calling, good, God.
Away melancholy, let it go.

Speak not to me of tears,
Tyranny, pox, wars,
Saying, Can God
Stone of man’s thoughts, be good?
Say rather it is enough
That the stuffed
Stone of man’s good, growing,
By man’s called God.
Away, melancholy, let it go

Stevie Smith, “Away, Melancholy”

Although of course, if there is such a God, and that God is good, she might care very deeply.

375 thoughts on “Morality for dummies”

  1. I want there to be a world where I am the Emperor, surrounded by eunuchs and beautiful women at my disposal. If I have a strong enough weapon, and can convince enough people to listen to my wishes, then I can have a world I don’t want to change.

    What does it have to do with morality?

  2. Elizabeth,

    But your argument gives no explanation for why one should care about others’ states, as long as they achieve their own state that they don’t want to change.

    It’s lacking that crucial connection.

  3. phoodoo: But your argument gives no explanation for why one should care about others’ states, as long as they achieve their own state that they don’t want to change.

    It’s lacking that crucial connection.

    It wasn’t supposed to be an explanation. It was supposed to show that what is good and bad is easily derivable from direct observation.

    That means that if we care whether good and bad things happen to any creature other than ourselves, it’s not at all difficult to figure out what those good and bad things are, although sometimes there are tricky opposing claims, and the choice is between two bad things, rather than between bad and good.

    So having established that we do not need any divine revelation to figure out what is bad and what is good, we can now turn to the question as to why we care in the first place.

    Theists often say that we care because we each have a conscience that was divinely conferred on humans alone. Some say that we care because Eve ate from the Tree of Knowledge of Good and Evil.

    Biologists and psychologists might say that we care because we have evolved as social animals, where our own welfare is dependent on the welfare of others, necessitating the construction of social norms, and conferring an evolutionary advantage on the capacity to detect “cheating” behaviour. Also because we give birth to young that are, like the offspring of marsupials, still essentially embryos, and dependent on their parents for not just months but years, necessitating strong nurturing instincts in order to produce viable descendants.

    My point is that our capacity to care about harm to others, not just ourselves, is an observable trait in human nature. And my argument is that whether we think it evolved, was constructed by our forebears in order to facilitate effective societies, was divinely conferred, or was the result of Edenic Disobedience makes no practical difference.

    I’d say in many ways the first two are better, because they stop us getting distracted by alleged revelation of Divine Will, and allow us simply to base our decisions as to what actions will least harm and best benefit others on the clearly observable and anticipatable effects of our actions on their state of mental and physical health.

    I’d also argue that in practice, that is how many religions work. People don’t find Euthyphro’s dilemma much of a dilemma. They recognise, on the whole, what is good, and infer that if their god is good, that that is what their god commands. There are some aberrations, of course, such as William Lane Craig and his abhorrent Divine Command Theory approach, in which he defines as good what God wants, rather than the other way round.

    But luckily a lot of theists have more sense.

  4. Elizabeth: My point is that our capacity to care about harm to others, not just ourselves, is an observable trait in human nature. And my argument is that whether we think it evolved, was constructed by our forebears in order to facilitate effective societies, was divinely conferred, or was the result of Edenic Disobedience makes no practical difference.

    If it were only a matter of evolvable usefulness through biology, we are also well aware that environments change. So what was good some years ago isn’t necessarily good now. If anyone REALLY believed that we only have these feelings of morality because they were useful in biology, we would ignore them whenever we thought they were much less useful – which is often.

    But of course no one actually believes that.

    Also, I think your argument is flawed when you say “we know what is good and bad”, as if these are the same from all angles of observation (that would be true, of course, from the divine perspective, but not necessarily true at all from the chance one).

    Plus, you are still left with the giant assumption, for which you have zero evidence, that if a species gets accidental DNA combinations that produce a moral desire (again, fairy tales, but we will go with it), this would favor their existence. Doesn’t seem to be the case in most animals, I’m afraid. Kill or be killed seems to be the more effective strategy in nature.

  5. phoodoo: If anyone REALLY believed that we only have these feelings of morality because they were useful in biology, we would ignore them whenever we thought they were much less useful – which is often.

    You yourself said that you can ignore what you know to be right and do what you want instead. So you are no different to anyone else, as it turns out. You do ignore what you know to be right sometimes.

  6. phoodoo: If I have a strong enough weapon, and can convince enough people to listen to my wishes, then I can have a world I don’t want to change.

    Your god apparently wants that to happen (no such thing as accidents, remember) as it’s happened many times in history.

    Zerjal et al. (2003) identified a Y-chromosomal lineage present in about 8% of the men in a large region of Asia (about 0.5% of the world total). The paper suggests that the pattern of variation within the lineage is consistent with a hypothesis that it originated in Mongolia about 1,000 years ago, and thus several generations prior to the birth of Genghis. Such a spread would be too rapid to have occurred by genetic drift, and must therefore be the result of selection. The authors propose that the lineage is carried by likely male-line descendants of Genghis Khan and his close male relatives, and that it has spread through social selection due to the power that Genghis Khan and his direct descendants held and a society which allowed one man to have many children through having multiple wives and widespread rape in conquered cities.

    https://en.wikipedia.org/wiki/Descent_from_Genghis_Khan#DNA_evidence

  7. phoodoo: If it were only a matter of evolvable usefulness through biology, we are also well aware that environments change. So what was good some years ago isn’t necessarily good now.

    I started off by proposing that a “good state” is a state an organism tries to achieve or maintain, while a “bad state” is a state an organism tries to change, and that “morality” is the methodology we use to try to figure out what to do when our actions are likely to produce conflicting results – our “want vs should” resolving methodology, if you will.

    For an intelligent lone predator, that “morality” might consist of a set of internal rules that say things like “if you can stay hungry a bit longer, don’t go for the skinny easy-to-kill food critter, hang in there for a chance at the strong-but-juicy one”. In other words the “ought” comes in de-prioritising the immediate state (bad because hungry) and up-weighting a possible future state (full of juicy prime food critter, rather than half-full of skinny runt): “I want a meal now, but I ought to wait for a better opportunity”.

    But we are not intelligent lone predators. We are intelligent social animals. So our “morality” (our “want vs should” resolving methodology) extends to our fellow organisms. Which organisms count as members of the fellow group is itself a question that requires resolution, but not one, I suggest, that religion helps with (well, Jesus answered the question with his Good Samaritan parable, but it’s amazing how few Christians pay any attention).

    So I suggest that as humans, we are stuck with a social morality, i.e. one in which the candidates for whose welfare matters extend beyond ourselves. If, following some global catastrophe, our descendants emerge as lone predators, then their morality will change – indeed, we may not, prospectively, regard it as “morality” at all, if we define morality only as a “want vs should” methodology that extends beyond ourselves.

    But for humans now, it’s the one we’ve got, because we are, like it or not, social animals.

    phoodoo: Doesn’t seem to be the case in most animals, I’m afraid. Kill or be killed seems to be the more effective strategy in nature.

    Well, no. There are many many examples of social species that do not operate that way, and have evolved or constructed complex social patterns of behaviour that allow the group to benefit at individual cost.

    In game theory terms, species that solve the Prisoner’s Dilemma tend to survive as social species. We are one of them.
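
    To make that concrete, here is a minimal sketch in Python of an iterated Prisoner’s Dilemma (the payoff numbers are the standard textbook values; the two strategies are invented for illustration, not a model of any real species). Reciprocal cooperation out-scores unconditional defection once encounters are repeated:

        # Iterated Prisoner's Dilemma, illustrative only.
        # Payoffs: mutual cooperation 3 each, mutual defection 1 each,
        # lone defector 5, exploited cooperator 0.
        PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                  ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

        def tit_for_tat(opponent_moves):
            # Cooperate first, then copy the opponent's last move.
            return opponent_moves[-1] if opponent_moves else "C"

        def always_defect(opponent_moves):
            return "D"

        def play(strat_a, strat_b, rounds=100):
            seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
            score_a = score_b = 0
            for _ in range(rounds):
                a, b = strat_a(seen_by_a), strat_b(seen_by_b)
                pa, pb = PAYOFF[(a, b)]
                score_a, score_b = score_a + pa, score_b + pb
                seen_by_a.append(b)
                seen_by_b.append(a)
            return score_a, score_b

        print(play(tit_for_tat, tit_for_tat))      # (300, 300)
        print(play(always_defect, always_defect))  # (100, 100)
        print(play(tit_for_tat, always_defect))    # (99, 104)

    Defection still “wins” any single pairing, but a population of reciprocators ends up far better off than a population of defectors – which is the sense in which a social species can be said to have solved the dilemma.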

  8. phoodoo: Plus, you are still left with the giant assumption, for which you have zero evidence, that if a species gets accidental DNA combinations that produce a moral desire (again, fairy tales, but we will go with it), this would favor their existence. Doesn’t seem to be the case in most animals, I’m afraid. Kill or be killed seems to be the more effective strategy in nature.

    Robots have been observed to evolve altruistic behavior, totally independently, as a solution to problems they have faced.

    http://evolution.berkeley.edu/evolibrary/news/110501_robots

    In the most important part of their study, the researchers tested what would happen if the robots were more or less related to one another — i.e., if some of the robots in a group started off with computer programs that were genetically identical to one another. In these cases, the robots evolved exactly as we would expect them to based on Hamilton’s rule. The more closely related the robot group was (i.e., the more clones it contained), the more likely the whole group was to evolve altruistic behavior and ultimately wind up sharing all their food disks. And the less an individual robot lost by sharing a food disc and the more other robots benefited from shared food discs, the more likely altruism was to evolve. Hamilton’s rule did seem to hold in this simulation, which closely mimicked many aspects of a real, biological population.
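
    For reference, the Hamilton’s rule mentioned in that quote is usually written r × b > c: an altruistic act is favoured when the benefit b to the recipient, discounted by the genetic relatedness r between actor and recipient, exceeds the cost c to the actor. A toy check in Python (the scenario numbers are invented purely for illustration):

        # Hamilton's rule: altruism is favoured when r * b > c, where
        # r = relatedness, b = benefit to recipient, c = cost to actor.
        def altruism_favoured(r, b, c):
            return r * b > c

        # Sharing a food disc with a clone (r = 1.0) at small cost:
        print(altruism_favoured(r=1.0, b=0.5, c=0.1))  # True
        # The same act towards an unrelated robot (r = 0.0):
        print(altruism_favoured(r=0.0, b=0.5, c=0.1))  # False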

  9. William J. Murray: I agree that it is a perfectly good morality for dummies.

    Whereas of course your superior morality allows you to claim that faith healers can cure cancer, not giving anyone who might hear such a claim false hope.

  10. Elizabeth,

    Except, of course, for the times when it’s more beneficial to kill or steal – which people do all the time.

    Evolution: the theory that explains everything that exists by saying it exists, so evolution must explain it.

  11. OMagain,

    Haha. I can introduce you to lots of video games where it is definitely more beneficial to kill as many things as you can.

  12. phoodoo: Except, of course, for the times when it’s more beneficial to kill or steal – which people do all the time.

    Except theists, who never do anything wrong.

    phoodoo: Evolution: the theory that explains everything that exists by saying it exists, so evolution must explain it.

    Actually I prefer your “explanation” now. Let there be light. So much more satisfying.

  13. phoodoo: Haha. I can introduce you to lots of video games where it is definitely more beneficial to kill as many things as you can.

    I’ve worked on multiple AAA videogames that have sold millions of copies around the world. Somehow I doubt you have.

    No real point, just rubbing it in.

  14. phoodoo: Haha. I can introduce you to lots of video games where it is definitely more beneficial to kill as many things as you can.

    In those video games the “things” are usually pre-programmed and scripted. That you think you’ve made some kind of point here is amusing.

    So, to be clear, you don’t see any difference between a scientific experiment that demonstrates the independent evolution of altruism and a computer game where the actions are mostly scripted?

    I’m starting to see why you have such problems understanding anyone else’s point of view.

  15. Elizabeth,

    You still haven’t explained, in your logical proposition, how you made the jump from “We want to be in a state that’s good” to “we want others to be in a state that is good.”

    You just pulled that assumption out of thin air, and asked the people following your logic to accept that it’s true.

  16. phoodoo: Sad news for you: computer simulations are totally scripted.

    No, they are not! If they were, computers would not be able to generate new knowledge. And they demonstrably do.

  17. phoodoo: Sad news for you: computer simulations are totally scripted.

    And that’s it, is it? Therefore morality has to come from god, because demonstrating it emerging from interacting autonomous agents doesn’t count, because reasons.

  18. phoodoo: Sad news for you: computer simulations are totally scripted.

    Odd how things can happen that surprise the creators of such simulations then, if it’s all scripted in advance.

    http://www.uvm.edu/~uvmpr/?Page=news&storyID=11482

    And this may help to explain the most surprising — and useful — finding in Bongard’s study: the changing robots were not only faster in getting to the final goal, but afterward were more able to deal with new kinds of challenges that they hadn’t before faced, like efforts to tip them over.

  19. Elizabeth: Good. So let’s hear less about how atheists aren’t entitled to moral outrage.

    Hear less from whom? I didn’t say atheists weren’t entitled to their moral outrage. Under proxy-atheism, dummies are also entitled to do and say all sorts of stupid, irrational things by any accidental physico-chemical process that generates the behavior.

    What you don’t have a logical basis for, which I have repeatedly argued, is claiming or implying that your outrage is anything other than a rather petulant expression of personal dislike or preference generated by a string of accidental physico-chemical interactions.

    If you personally, subjectively, fallibly feel like humans should behave like social animals, and personally, subjectively, fallibly feel like humans should express that social interaction in certain ways that you personally, subjectively, fallibly like/prefer, you’re entitled by haphazard physico-chemical processes to pitch a wall-eyed fit every time someone uses the word “homo” or creates a film portraying Planned Parenthood in a negative light.

    Haven’t you heard? We live in the age of entitlement. You’re entitled to blow up every “microaggression” into a national cause if you feel like it.

    There’s just no reason for me, or anyone else, to give two shits about your personal, haphazardly-generated personal morality. No reason I should, at all, because my haphazard chemicals just ain’t feelin’ it, dawg.

  20. phoodoo,

    You still haven’t explained, in your logical proposition, how you made the jump from “We want to be in a state that’s good” to “we want others to be in a state that is good.”

    You haven’t really explained ‘moral outrage’ either. Why do you experience a visceral sensation of disquiet when someone is cruel to someone else? Saying “it’s God” answers the question no better than “it’s genes-plus-environment”.

    (Kill-or-be-killed, incidentally, is not a universal law of selection. Selection is but rarely about directly reducing the numbers of your competitors).

  21. William J. Murray: What you don’t have a logical basis for, which I have repeatedly argued, is claiming or implying that your outrage is anything other than a rather petulant expression of personal dislike or preference generated by a string of accidental physico-chemical interactions.

    So, same as you then.

    William J. Murray: There’s just no reason for me, or anyone else, to give two shits about your personal, haphazardly-generated personal morality.

    Except in the outcomes. And I would hazard a guess that your life outcomes are poor. Hauntings, ghosts, alien abductions, gun battles with teens, faith healers for cancer cures and so on. None of that makes for a rational existence.

    William J. Murray: No reason I should, at all, because my haphazard chemicals just ain’t feelin’ it, dawg.

    And yet here you are, desperately trying to justify your beliefs to yourself. As that is what all of this is about.

    I mean, if you really thought Elizabeth was a robot-automaton-bag-of-chemicals then why are *you* wasting your time trying to argue your position with such a thing? It would be like telling a rock about Shakespeare. And yet here you are, doing exactly that.

  22. William J. Murray,

    There’s just no reason for me, or anyone else, to give two shits about your personal, haphazardly-generated personal morality. No reason I should, at all, because my haphazard chemicals just ain’t feelin’ it, dawg.

    So here we are again. Why should anyone give two shits about what you think ‘it’ wants? My soul just ain’t tuned in, ‘dawg‘. If I have one.

  23. William J. Murray: Under proxy-atheism, dummies are also entitled to do and say all sorts of stupid, irrational things by any accidental physico-chemical process that generates the behavior.

    You often begin your case with the phrase “under [insert noun du jour for what you think the other person’s world view is] X is Y”, but you rarely define the referent for the noun, and never, as far as I am aware, present an argument for why, under that [noun du jour], X must equal Y.

    In this case you seem to be saying that [proxy-atheism] provides no justification for moral outrage.

    If you think that [proxy-atheism] (it was [physicalism] last time, I think) is what I hold, explain why what I have said is inconsistent with it.

  24. phoodoo: Sad news for you, Computer simulations are totally scripted.

    This is incorrect. The program that runs the simulation may be “totally scripted” (or may not be – it may use an external RNG, for instance), but the outcome is not.

    It’s why we run the simulation – to find out the outcome.
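
    A toy illustration of the difference (an invented example, nothing to do with the robot studies): the few lines of Python below are completely deterministic – “totally scripted”, if you like – yet there is no practical way to know what they will print without actually running them. The script fixes the rules, not the outcome.

        # The logistic map in its chaotic regime: fully deterministic,
        # no randomness anywhere, yet the value after 1000 steps is
        # only knowable by running the loop.
        x = 0.2
        for _ in range(1000):
            x = 3.9 * x * (1.0 - x)  # the same fixed rule at every step
        print(x)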

  25. EL said:

    You often begin your case with the phrase “under [insert noun du jour for what you think the other person’s world view is] X is Y”, but you rarely define the referent for the noun,

    I’m writing for those who understand what I mean.

    In this case you seem to be saying that [proxy-atheism] provides no justification for moral outrage.

    No, that’s not what I said. In fact, I explicitly said the opposite. Proxy-atheism physically justifies everything that actually occurs. It justifies your moral outrage, and the moral outrage that exists in opposition to your moral outrage.

  26. Allan Miller asks:

    So here we are again. Why should anyone give two shits about what you think ‘it’ wants?

    The reason to care about an objective morality would be the necessary consequences, good and bad, for moral and immoral behavior.

  27. William J. Murray: The reason to care about an objective morality would be the necessary consequences, good and bad, for moral and immoral behavior.

    fmm refuses to answer if he’d push 1 Christian onto the tracks to save 5 abortion doctors. Would you?

  28. William J. Murray: The reason to care about an objective morality would be the necessary consequences, good and bad, for moral and immoral behavior.

    Consequences that happen after you die, right? Or are you thinking of something different? As there does not appear to be any divine justice in this life!

  29. William J. Murray,

    Me: So here we are again. Why should anyone give two shits about what you think ‘it’ wants?

    WJM: The reason to care about an objective morality would be the necessary consequences, good and bad, for moral and immoral behavior.

    And those are … ?

  30. OMagain said:

    Except in the outcomes. And I would hazard a guess that your life outcomes are poor. Hauntings, ghosts, alien abductions, gun battles with teens, faith healers for cancer cures and so on. None of that makes for a rational existence.

    As I have said before, my life outcomes have been amazing – far beyond anything I ever hoped for or thought possible. I have changed my beliefs many times when I found them unproductive.

    And yet here you are, desperately trying to justify your beliefs to yourself. As that is what all of this is about.

    I think this is more of a convenient characterization than an objective assessment of my contributions here.

    I mean, if you really thought Elizabeth was a robot-automaton-bag-of-chemicals then why are *you* wasting your time trying to argue your position with such a thing? It would be like telling a rock about Shakespeare. And yet here you are, doing exactly that.

    You are assuming that my purpose here is to convince others of something. As I have repeatedly stated, it is not.

  31. Elizabeth: What if you actually aren’t making sense, and those who think they understand you are equally confused?

    How could you tell?

    Whether or not I can tell is irrelevant to my purpose here.

  32. Elizabeth:
    What IS your purpose, if I may ask? (I can think of possibilities, amusement being one)

    I’m here to examine my own views in a forum where they are challenged by others, and to offer my perspective to those who may find that perspective useful.

  33. I think it is not a bad start at naturalizing ethics.

    I do think that pain is intrinsically normative for sentient organisms, and that all sentient organisms will act so as to avoid pain (unless there is a long-term goal that can only be accomplished with short-term pain).

    But I don’t think that morality can be treated as a utilitarian, cost/benefit summation of all relevant pains and pleasures.

    As far as moral theory goes, I’m a strong supporter of virtue ethics and, to some extent, care ethics. But while care is (I would say) the foundation of virtue, and care also evolves (evolutionarily) from empathy, empathy is not sufficient for care.

    In fact there are really serious problems with making empathy or sympathy the foundation of morality, because it is a fact of human psychology that we are more likely to experience empathy towards those who are more similar to us — with the consequence being that empathy can become a tool of dehumanization.

    In order to capture, at the theoretical level, the implicit universality of morality — e.g. that what is immoral is what should not happen to anyone, etc. — we need a way of thinking about suffering and care that transcends the limits of tribalism and group-thinking.

  34. Kantian Naturalist: But I don’t think that morality can be treated as a utilitarian, cost/benefit summation of all relevant pains and pleasures.

    I don’t think I suggested that it did. I was careful to say that I was regarding “morality” as the methodology by which we resolve the “want-ought” tension. I did not specify a methodology.

    Though I think I tend towards a utilitarian approach myself, I wouldn’t call it “cost/benefit summation“.

    Kantian Naturalist: empathy is not sufficient for care.

    And possibly not necessary either.

    Kantian Naturalist: In fact there are really serious problems with making empathy or sympathy the foundation of morality, because it is a fact of human psychology that we are more likely to experience empathy towards those who are more similar to us — with the consequence being that empathy can become a tool of dehumanization.

    Absolutely agreed.

    Kantian Naturalist: In order to capture, at the theoretical level, the implicit universality of morality — e.g. that what is immoral is what should not happen to anyone, etc. — we need a way of thinking about suffering and care that transcends the limits of tribalism and group-thinking.

    Yes. That’s why I like whoever it was (Rawls?) who said “do what an unbiased judge would do” (to paraphrase from my faulty memory).

  35. OMagain: fmm refuses to answer if he’d push 1 Christian onto the tracks to save 5 abortion doctors. Would you?

    My decision in such a circumstance wouldn’t be affected by knowing those qualities about those involved.

  36. William J. Murray: My decision in such a circumstance wouldn’t be affected by knowing those qualities about those involved.

    So what would your decision be, do you think?

    (Not that I’m a great fan of trolley problems).

  37. Allan Miller asks:

    And those are … ?

    Ultimately, I think the consequence of immoral behavior is increasing spiritual pain and self-destruction, and the consequence of moral behavior is spiritual healing and actualized/realized agape. I think these consequences operate in both a relatively immediate and a long-term, cumulative manner.

  38. Elizabeth: So what would your decision be, do you think?

    (Not that I’m a great fan of trolley problems).

    I have no stock, formulaic answer. It depends on far more contextual information than is provided or even can be provided. I might push someone on the tracks or I might not. I think in most such scenarios, if I could, I’d rather jump on the tracks myself. I don’t think there is a clear moral answer one way or another, so my decision would be based on the context of the situation.

  39. William J. Murray: It depends on far more contextual information than is provided or even can be provided. I might push someone on the tracks or I might not. I think in most such scenarios, if I could, I’d rather jump on the tracks myself.

    Exactly.

    How do you know, for instance, whether one person will be enough to stop the train?

    peace
