What would Darwin do?

At Evolution News and Views, David Klinghoffer presents a challenge:

Man needs meaning. We crave it, especially when faced with adversity. I challenge any Darwinist readers to write some comments down that would be suitable, not laughable, in the context of speaking to people who have lived through an event like Monday’s bombing. By all means, let me know what you come up with.

Leaving aside Klinghoffer’s conflation of “Darwinism” with atheism, and reading it as a challenge for those of us who do not believe in a supernatural deity or an afterlife (which would include me), and despite lacking the eloquence of the speakers Klinghoffer refers to, let me offer some thoughts, not on Monday’s bombing specifically, but on violent death in general, which probably touches us all at some time. Too many lives end far too soon:

We have one life, and it is precious, and the lives of those we love are more precious to us than our own. Even timely death leaves a void in the lives of those left, but the gap left by violent death is ragged, the raw end of hopes and plans and dreams and possibilities. Death is the end of options, and violent death is the smashing of those options. Death itself has no meaning. But our lives and actions have meaning. We mean things, we do things, we act with intention, and our acts ripple onwards, changing the courses of other lives, as our lives are changed in return. And more powerful than the ripples of evil acts are acts of love, kindness, generosity, and imagination. Like the butterfly in Peking that can cause a hurricane in New York, a child’s smile can outlive us all. Good acts are not undone by death, even violent death. We have one life, and it is precious, and no act of violence can destroy its worth.

823 thoughts on “What would Darwin do?”

  1. If you don’t think that the teenager deserves to be obeyed by the inhabitants of the universe he creates, then you are conceding that “because he created me” is not a sufficient reason to obey God, either.

    Why are we morally obligated to obey God, then?

  2. William has responded to item #1 from my long list of criticisms. I hope he’ll also address items 2-8.

    He writes:

    When “because I think so” is assumed to be your fundamental source of morality, there is no reason to believe that what you think may be wrong…

    Untrue. You can still err in your choice of moral axioms (by not realizing that they imply something unacceptable to you), or you can mess up in the chain of reasoning from axioms to conclusion.

    What you don’t have, after eliminating errors of the above types, is a way of objectively determining whether your axioms are “correct”. However, your system doesn’t provide that either.

    What keiths continues to do is evaluate the proposition of an absolute morality from the lens that no such thing exists…

    No, I’ve pointed out that even if we assume that absolute morality exists, we know – and you have conceded – that humans don’t have perfectly reliable access to it. Thus, under your system, any assertion of moral axioms is tantamount to “X is moral/immoral because I think this is what God wants.”

    To put it differently, all any of us can do is to operate according to what he or she thinks is moral. But even if we assume the existence of objective morality, we have no idea whether what we think is moral is in line with it. It might be well aligned with objective morality, or totally at odds with it, or anything in between.

    Your criticism of subjective morality is correct — it can’t be used to conclusively resolve fundamental moral disagreements — but the same criticism applies to your own system.

    keiths might argue that there is no way to scientifically measure that which they agree upon as a self-evident truth, but I never claimed that morality lay in the domain of science to be able to measure or codify.

    The problem is that there is no known reliable way, scientific or otherwise, to determine whether something is objectively moral.

    My criticism stands as written:

    1. William complains that under “Darwinian morality”, moral disputes can’t be conclusively resolved because the justification for a moral decision boils down to “because I, personally, think so.” He doesn’t seem to realize that moral disagreements can’t be adjudicated under his system, either, because the justification for a moral decision boils down to “because I, personally, think this is what God wants.”

    Well, that didn’t go very well for you, William. Do you want to tackle one of my other seven criticisms, or have another go at this one?

  3. William, my argument that there is an objective (i.e. independent of either) way of determining which of two codes of conduct, one altruistic and one non-altruistic, is the better system, is that for a social species, an altruism-based moral code will tend to outperform other systems in terms of benefit for the members of those systems (in other words, self-interest, paradoxically, is best served by altruism).

    That doesn’t of course tell us that it is the more moral system; what it tells us is that an altruistic code of conduct works best for human societies (maximises the benefit for all members) – it doesn’t work for sharks. And it turns out that the altruistic morality that emerges from the constraints of social interdependence (according to game theory; see Robert Axelrod, and the toy tournament sketched at the end of this comment) is extraordinarily like traditional morality:

    • Be nice
    • But not so nice you do not retaliate when people cheat
    • Nonetheless forgive
    • And don’t be greedy.

      Which is why I suggest that altruistic morality is what people usually mean when they talk about “morality” (although I concede there are a few exceptions, like Ayn Rand): a system by which people are basically cooperative, but enforce justice tempered by mercy against cheaters.
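    A minimal sketch of the kind of round-robin tournament Axelrod describes, assuming the standard iterated prisoner’s dilemma payoffs; the strategy set, the 200-round match length, and the scoring convention are my illustrative choices, not Axelrod’s actual tournament entries:

    ```python
    # Toy round-robin iterated prisoner's dilemma, loosely in the spirit of
    # Axelrod's tournaments. Payoffs are the standard ones; everything else
    # (strategy set, match length, scoring) is an illustrative assumption.
    import itertools

    # (my_payoff, their_payoff) indexed by (my_move, their_move);
    # 'C' = cooperate, 'D' = defect.
    PAYOFFS = {
        ('C', 'C'): (3, 3),  # reward for mutual cooperation
        ('C', 'D'): (0, 5),  # sucker's payoff vs. temptation to defect
        ('D', 'C'): (5, 0),
        ('D', 'D'): (1, 1),  # punishment for mutual defection
    }

    def tit_for_tat(mine, theirs):
        """Be nice on the first move, then mirror the opponent's last move."""
        return 'C' if not theirs else theirs[-1]

    def grudger(mine, theirs):
        """Cooperate until the opponent defects once, then never forgive."""
        return 'D' if 'D' in theirs else 'C'

    def always_defect(mine, theirs):
        """The purely selfish strategy."""
        return 'D'

    def always_cooperate(mine, theirs):
        """Nice, but never retaliates, so it can be exploited."""
        return 'C'

    def play_match(strat_a, strat_b, rounds=200):
        """Play one iterated match; return the two players' total scores."""
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strat_a(hist_a, hist_b)
            move_b = strat_b(hist_b, hist_a)
            pay_a, pay_b = PAYOFFS[(move_a, move_b)]
            score_a, score_b = score_a + pay_a, score_b + pay_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    strategies = {
        'tit_for_tat': tit_for_tat,
        'grudger': grudger,
        'always_defect': always_defect,
        'always_cooperate': always_cooperate,
    }

    # Round robin: every strategy plays every other strategy and itself.
    totals = {name: 0 for name in strategies}
    pairs = itertools.combinations_with_replacement(strategies.items(), 2)
    for (name_a, strat_a), (name_b, strat_b) in pairs:
        score_a, score_b = play_match(strat_a, strat_b)
        totals[name_a] += score_a
        if name_a != name_b:
            totals[name_b] += score_b

    for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f'{name}: {total}')
    ```

    Run as written, the nice-but-retaliatory strategies (tit_for_tat and grudger) finish ahead of always_defect, which profits against the unconditional cooperator but starves against anyone who retaliates; that is the pattern the four bullet points above summarise.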

  4. William, my argument that there is an objective (i.e. independent of either) way of determining which of two codes of conduct, one altruistic and one non-altruistic, is the better system, is that for a social species, an altruism-based moral code will tend to outperform other systems in terms of benefit for the members of those systems (in other words, self-interest, paradoxically, is best served by altruism).

    This assumes the consequent that all moral codes should have to do with however you define “social performance” or “benefit”, which is exactly what is being challenged – the basis of the morality.

    But I don’t guess pointing that out for the nth time is going to be the charm. You cannot have your relativistic cake and eat it, too, Liz. You’re just using the term “objective” to keep you from having to accept what relativism necessarily means, much as determinists use the term “compatibilism” to hide from the consequence of their beliefs.

    As I said, my argument banks on people not being willing to go where they have to go to prove the argument wrong. Few are, because they find it unpalatable.

    However, nobody is required to believe only that which is rationally coherent. People can believe whatever they wish, or whatever they are programmed to believe, whether it can be rationally defended or not.

    Another interesting question is: why worry about whether or not your beliefs about morality can be rationally defended? I mean … what difference does it make?

  5. William J. Murray: This assumes the consequent that all moral codes should have to do with however you define “social performance” or “benefit”, which is exactly what is being challenged – the basis of the morality.

    No, it doesn’t assume the consequent.

    It merely asks: what code of behaviour will tend to emerge in human societies (i.e. non-arbitrarily, and independently of what I, or anyone else, wants it to be)? And the answer turns out to be: cooperative codes with cheater-detection.

    No consequent is assumed.

    But I don’t guess pointing that out for the nth time is going to be the charm. You cannot have your relativistic cake and eat it, too, Liz. You’re just using the term “objective” to keep you from having to accept what relativism necessarily means, much as determinists use the term “compatibilism” to hide from the consequence of their beliefs.

    You defined “relative morality” (at least – I am assuming that “relativistic” is a synonym) as being “because I/we think so” morality. What I have argued is that a cooperative-with-cheater detection morality tends to emerge whether I think I think so or not. In other words, it’s objectively the system that wins out, not because I choose it (after all, if I were Ayn Rand, I’d choose something different, but I’d be, empirically, demonstrably, wrong). So no, I’m not having my cake and eating it, unless you have changed your definition of “relative morality”.

    As I said, my argument banks on people not being willing to go where they have to go to prove the argument wrong. Few are, because they find it unpalatable.

    However, nobody is required to believe only that which is rationally coherent. People can believe whatever they wish, or whatever they are programmed to believe, whether it can be rationally defended or not.

    Another interesting question is: why worry about whether or not your beliefs about morality can be rationally defended? I mean … what difference does it make?

    It only makes a difference to whether your argument for God makes any sense. If an atheistic morality can be rationally defended – and I argue that it can, indeed it can be modelled on a system of logic circuits, as I said – then your argument for God fails.

    Doesn’t mean that God doesn’t exist, just means that it’s not a good argument that God does.

  6. No, it doesn’t assume the consequent.

    Yes, it does. Blatantly. Obviously. You assume that morality is a “socially emergent tendency” and that the “correct” morality is by definition the one that “wins out” – however you happen to define that.

    IOW, your “morality” is the same as “survival of the fittest”; it’s not a description of how one should act (a prescriptive “most fit” morality), but merely a description (according to you) of whatever aggregate morality happens to have won out (supposedly) in the competition of survival of societies. Meaning, if it just so happens that the “socially emergent tendency” that “happens to have won out” was one where might makes right was the descriptive motto, then might makes right would be, by your Darwinistic system, the correct morality.

  7. What I have argued is that a cooperative-with-cheater detection morality tends to emerge whether I think I think so or not.

    As if it is necessarily true that “what tends to emerge” is “what morality is”. Assuming the consequent. I disagree that behavioral systems “that tend to emerge” have anything whatsoever to do with what is moral. That morality has to do with “behaviors that tend to emerge” is only “because I/we say so”, and cannot have any other basis.

    You keep referring to “objective” considerations only after you’ve assumed the consequent and have assigned a “because I/we say so” definition of “what morality is and how it should be measured” in the first place. Yes, you might be able to objectively measure “what tends to emerge” in successful societies, but that is only possible after you have defined “what behavior tends to emerge” as “morality”, and have also defined what “successful” means.

    It is interesting that you are now basically making a Darwinistic case, pretty much defining morality as “what tends to emerge” much like one uses “fitness” as a description of “whatever trait happens to survive”. Because a behavior tends to emerge in social groups, it can only be defined as “morality” in a descriptive, not a prescriptive, sense. Under your argument, if “might makes right” tends to emerge (which I think there is a far better case for than altruism), then it would, under your system, be the “correct” morality by definition. Just as whatever happens to survive, under biological Darwinism, is defined as “fit”.

  8. OK, let me rephrase: you make the working assumption that there are two classes of human being: free-will owners and biological automata, even though you have no evidence to support your model?

    I also hold that there are some people with free will that are employing it in a way that would make them indistinguishable from biological automatons. So, that might make for a third classification in terms of free will.

    I have evidence that supports that outlook – I’ve explained some of it here in how I describe some of the behaviors. For example, I think that when people admit that they do not choose their beliefs, it is evidence that they either do not have free will, or that they are essentially using it to “become” like a BA. But I don’t consider that to be evidence that rises to the level of supporting any such claim – which is why I don’t claim it to be so. Also, some people will testify (tell you) that they do not have free will. That’s also evidence – but again, not enough to make a case out of.

    It’s just a working model that helps me in my day to day life – I don’t assert that it is true.

  9. William to Lizzie:

    You assume that morality is a “socially emergent tendency” and that the “correct” morality is by definition the one that “wins out” – however you happen to define that.

    No, she doesn’t. Read what she wrote:

    That doesn’t of course tell us that it is the more moral system; what it tells us is that an altruistic code of conduct works best for human societies (maximises the benefit for all members)…

    For someone who complains so much about being misunderstood, William, you’re awfully lazy when it comes to understanding others.

    William J. Murray: As if it is necessarily true that “what tends to emerge” is “what morality is”. Assuming the consequent. I disagree that behavioral systems “that tend to emerge” have anything whatsoever to do with what is moral. That morality has to do with “behaviors that tend to emerge” is only “because I/we say so”, and cannot have any other basis.

    William, your goalposts are moving. We agreed, I thought, that morality is about what we “ought” to do. What tends to emerge in human societies are codes of conduct – systems that lay down what we “ought” to do. So by your own agreed definitions, moral systems are codes of conduct, and they self-evidently emerge. I thought we’d got beyond that to the issue of how to distinguish (if we can) between different codes of conduct, and whether there is a non-arbitrary objective standard that can be used to rank them (i.e. not a “because I/we say so” standard). Again, I’m using your debate parameters here. I am assuming no consequent.

    What I am saying is that literally, logically, the code-of-conduct that emerges as the winning strategy for societies of interdependent actors is one that looks extremely like traditional altruistic morality (do as you would be done by) with justice-tempered-by-mercy for cheaters. This is not “because I say so”; it’s what can be observed to emerge independently in human societies (the Golden Rule seems to have many independent origins), and can even be spat out by a computer. And it is the “winning” strategy in the sense that it is the one that best serves the individual actors. In other words, our own interests are best served by it, just as in your system, people are rationally moral because they know ultimately there are consequences for immorality.

    You keep referring to “objective” considerations only after you’ve assumed the consequent and have assigned a “because I/we say so” definition of “what morality is and how it should be measured” in the first place.

    No, I have not. I have defined a morality as a system of oughts, just as you have, and I have demonstrated that it is possible to derive a perfectly rational set of oughts from first principles that do not depend on what I say, and which, like yours, serve your ultimate interest.

    Yes, you might be able to objectively measure “what tends to emerge” in successful societies, but that is only possible after you have defined “what behavior tends to emerge” as “morality”, and have also defined what “successful” means.

    Nope. I have defined morality as the domain of oughts. I’ve even expanded it to include self-interested oughts, so that I am no longer even assuming that altruistic oughts are in the moral camp. I accept that Ayn Rand regarded selfish oughts as moral oughts, and any argument I make about whether one morality objectively trumps another needs to take into account moralities like Rand’s.

    What I’m saying is that if you put lots of codes of conduct into a barrel, and let them fight it out, the one that emerges as the winning strategy is not Rand’s but something more like Jesus’s/Confucius’s/Rabbi Hillel’s.
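    As a minimal sketch of that barrel: a toy replicator dynamic in which each code’s population share grows in proportion to its average score against the current mix. The per-round payoff numbers and the 60-generation run below are my illustrative assumptions (derived from long tit-for-tat vs. always-defect vs. always-cooperate matches under the standard prisoner’s dilemma payoffs), not data from Axelrod:

    ```python
    # Toy 'ecological' tournament: each strategy's population share grows in
    # proportion to its average score against the current mix. The payoff
    # numbers are illustrative per-round averages, not empirical data.
    STRATS = ['tit_for_tat', 'always_defect', 'always_cooperate']

    # AVG_PAYOFF[i][j] = average per-round payoff to strategy i against strategy j.
    AVG_PAYOFF = [
        [3.000, 0.995, 3.000],  # tit_for_tat: loses one round to a defector, then punishes
        [1.020, 1.000, 5.000],  # always_defect: feasts on unconditional cooperators
        [3.000, 0.000, 3.000],  # always_cooperate: exploited by defectors
    ]

    shares = [1 / 3] * 3  # start with equal population shares

    for generation in range(60):
        # Fitness of each strategy = its payoff averaged over the current population.
        fitness = [sum(AVG_PAYOFF[i][j] * shares[j] for j in range(3))
                   for i in range(3)]
        mean_fitness = sum(f * s for f, s in zip(fitness, shares))
        # Replicator update: shares grow or shrink relative to mean fitness.
        shares = [s * f / mean_fitness for s, f in zip(shares, fitness)]

    for name, share in zip(STRATS, shares):
        print(f'{name}: {share:.3f}')
    ```

    Defection flourishes briefly by exploiting the unconditional cooperators, then collapses once they are depleted; the retaliating cooperator inherits the population. That is the sense in which the cooperative code “wins out” without anyone decreeing it.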

    It is interesting that you are now basically making a Darwinistic case, pretty much defining morality as “what tends to emerge” much like one uses “fitness” as a description of “whatever trait happens to survive”.

    I’m certainly making a Darwinist case for the emergence of morality, but I’m not defining morality as “what tends to emerge”. Eyes “tend to emerge”, but eyes are not morality. I’m defining morality as codes of conduct (what we “ought” to do, when it conflicts with what we want to do). I thought we’d agreed on that.

    And I’m saying that the code of conduct that emerges as the winning strategy for societies of interdependent individuals is altruism-based – cooperative.

    Because a behavior tends to emerge in social groups, it can only be defined as “morality” in a descriptive, not a prescriptive, sense.

    All morality is prescriptive. Unless you no longer agree that morality is about what we “ought” to do?

    Under your argument, if “might makes right” tends to emerge (which I think there is a far better case for than altruism), then it would, under your system, be the “correct” morality by definition.

    And if black were white, night would be day. And if we were a solitary species capable of reflective thought, our best code of conduct would probably not be altruistic.

    But the fact is that we are a social species – we live in groups. That means that our best code of conduct (the one that maximises personal benefit) is cooperative.

    Just as whatever happens to survive, under biological Darwinism, is defined as “fit”.

    Well, not quite correct, but yes, indeed, altruistic morality will tend to evolve in intelligent social species, because it promotes the welfare of both the individual and the group. In other words, it is objectively, empirically, rationally, coherently, true that cooperative morality is more beneficial than selfish morality.

    Ergo Darwinian morality is rational.

    QED.

  11. I thought we’d agreed on that.

    QED

    You see? You really can believe whatever you wish 🙂

  12. Liz said:

    What I am saying is that literally, logically, the code-of-conduct that emerges as the winning strategy for societies of interdependent actors is one that looks extremely like traditional altruistic morality.

    My disagreement is that “the code-of-conduct that emerges as the winning strategy for societies of interdependent actors” has anything to do with morality, so what it “literally, logically” looks like and how it can be “objectively measured” are irrelevant until you convince me of your premise about what morality is in the first place.

    And that is the challenge where your logic fails – where you constantly argue from the perspective that assumes the premise that is being challenged.

    And if black were white, night would be day. And if we were a solitary species capable of reflective thought, our best code of conduct would probably not be altruistic.

    Again, you assume morality inherently has to do with whether or not a species is social, simply because a large part of it has to do with how we treat others, as if social success or personal success (however that is defined) is the de facto goal of morality.

    However, that you think in terms of black and white (social/non-social) about what morality is about demonstrates your (thus far) inability to conceptualize beyond your imposed definition of morality. Just because morality gives you some oughts about how to treat others doesn’t mean it is about how to treat others; if that is what you think – that morality is “about” how you treat others – then your Darwinian system is trying to get an ought from an is – IOW, however we happen to treat others (in a successful society) is how we ought to treat others.

    Saying that morality is a code of oughts is meaningless. Oughts require a purpose. What is the purpose your system of altruistic oughts is in service to?
    Is your system of “oughts” not in service to “a successful society” (whatever that means)?

    I disagree that the purpose of morality is “a successful society”.

    It’s interesting that this is basically the same conceptual barrier that pops up in the Darwinism/ID argument: the Darwinistic unproven assumption of sufficiency of cause by the categorical forces of necessity and chance, and their inability to conceptualize outside of that assumption to understand the criticism levied against it.

    I’m not limited to your assumption of what morality is about, Liz. The fact that you “thought we had agreed” to it, and now think I’m “moving the goal post” (when I’ve consistently, flatly denied and repeatedly argued against what you “thought we had agreed upon”) only illustrates the depth of the chasm between what I say and what you read and interpret.

  13. Lizzie: No, it doesn’t assume the consequent.

    It merely asks: what code of behaviour will tend to emerge in human societies (i.e. non-arbitrarily, and independently of what I, or anyone else, wants it to be)? And the answer turns out to be: cooperative codes with cheater-detection.

    Maybe it does not assume the consequent, but developing cooperative codes has nothing to do with this moral code:

    Lizzie:

    • Be nice
    • But not so nice you do not retaliate when people cheat
    • Nonetheless forgive
    • And don’t be greedy.

    That goes far beyond cooperative codes.

    The biggest problem with your rationale is why an individual should reduce his personal expectations so that everybody, on average, can achieve more of their individual expectations.
    The second is that your morality is based on the observations of Axelrod, and I bet you that for each observation of altruistic winning behavior I will give you two of selfish winning behavior.

  14. Lizzie:

    Well, not quite correct, but yes, indeed, altruistic morality will tend to evolve in intelligent social species, because it promotes the welfare of both the individual and the group. In other words, it is objectively, empirically, rationally, coherently, true that cooperative morality is more beneficial than selfish morality.

    So there is no conflict between group and individual welfare! Nice. I wonder why Pol Pot or Stalin killed millions?

  15. Blas: So there is no conflict between group and individual welfare! Nice. I wonder why Pol Pot or Stalin killed millions?

    For the greater good.

    I think you are confusing rationality and reasoning with having correct premises.

  16. Blas,

    You’re not thinking very carefully. To say that something promotes the welfare of both the individual and the group is not the same as saying that there is never any conflict between individual and group welfare.

    Suppose I give $100,000 to you and William with instructions that each of you gets at least $10,000, but that you can allocate the remaining $80,000 between yourselves any way that you see fit. You both benefit from the gift, but there is still a potential conflict over how it is apportioned.

  17. My criticisms are based on your assumptions, your arguments and your claims, not my imagination.

    I take it you can’t defend your position. If you could, you undoubtedly would.

    P.S. There are actually nine criticisms, not eight as I said earlier. My mistake.

  18. keiths:
    My criticisms are based on your assumptions, your arguments and your claims, not my imagination.

    I take it you can’t defend your position. If you could, you undoubtedly would.

    P.S. There are actually nine criticisms, not eight as I said earlier. My mistake.

    No, your criticisms are based on your incorrect interpretation of what I’ve written here, nothing more, and appear to be immune to my attempts to correct you.

    You do not have access to “my assumptions”; you only have access to what you interpret of what I write. If you are unwilling (or unable) to alter your view of my position even when I correct you, responding to your erroneous, inapplicable challenges is unwarranted.

  19. Simply saying another poster is wrong is not useful. If you wish to engage in an adult conversation you need to explain how our interpretation differs from your intention and rephrase your position, illustrating it with concrete examples where possible.

  20. petrushka:
    Simply saying another poster is wrong is not useful. If you wish to engage in an adult conversation you need to explain how our interpretation differs from your intention and rephrase your position, illustrating it with concrete examples where possible.

    I’ve already explained why his interpretation on several points is incorrect prior to this post of his. It’s not my job to reiterate that which I have already pointed out to those that do not accept my corrections about what they imagine my position to be.

  21. William J. Murray: I’ve already explained why his interpretation on several points is incorrect prior to this post of his. It’s not my job to reiterate that which I have already pointed out to those that do not accept my corrections about what they imagine my position to be.

    You don’t have a job here. You have an opportunity.
    Edit to add:

    The opportunity is not to convince us. We all know the odds are against convincing anyone on the internet.

    I mean you have the realistic opportunity to make the best possible case for your position, and that means adapting and modifying your argument to clear up misunderstandings.

    That requires trying to see your opponent’s point of view (if you actually have free will, you can freely choose to see things from your opponent’s point of view).

    And having seen things from your opponent’s point of view, you can rephrase your argument to compensate.

  22. William,

    I’ve already explained why his interpretation on several points is incorrect prior to this post of his.

    And I’ve already explained why your objections are invalid, and why my criticisms do apply to your argument and your moral system. The ball’s in your court.

    You’ve already spilled thousands of words in this thread. If you actually thought you could defend your position, I have no doubt that you would spill a few more.

    Your silence speaks volumes, as KF would say.

  23. Blas: So there is no conflict between group and individual welfare! Nice. I wonder why Pol Pot or Stalin killed millions?

    Of course there is a conflict. That’s the point I’ve been making for pages now: that morality arises from the conflict between your own welfare and the welfare of others – when there is an “ought” as well as a “want” (although “ought” is also used to indicate a conflict between what we want now and what we want later).

    We recognise Pol Pot and Stalin as evil because they prioritised their own wants over the welfare of others.

  24. Lizzie:

    We recognise Pol Pot and Stalin as evil because they prioritised their own wants over the welfare of others.

    You have a particular view of the facts! I’m not surprised you are a Darwinist.
    Pol Pot and Stalin tried to implement an altruistic society in which there would no longer be any need to harm anyone. You can disagree with the method, but that should not count against your socially oriented morality.

  25. When you say Pol Pot et al tried to implement an altruistic society, I think you are conflating the advertising with the product. What they tried to implement was totalitarian rather than altruistic. If you are selling bags of shit you probably want to call it something else.

  26. petrushka:
    When you say Pol Pot et al tried to implement an altruistic society, I think you are conflating the advertising with the product. What they tried to implement was totalitarian rather than altruistic. If you are selling bags of shit you probably want to call it something else.

    No, you are evaluating the facts according to your beliefs. They always said they were truly convinced that communism was far better than capitalism. Why should I believe that you know what they thought better than they did? How do you know they were lying?

  27. No. Morality is always about consequences and outcomes and never about intentions. The road to hell is paved with good intentions.

    Don’t conflate political rhetoric designed to achieve power with morality. Propaganda is not morality.

  28. petrushka:
    No. Morality is always about consequences and outcomes and never about intentions. The road to hell is paved with good intentions.

    Tell that to Lizzie, because here

    Be nice
    But not so nice you do not retaliate when people cheat
    Nonetheless forgive
    And don’t be greedy.

    I see only good intentions.
    And if there is no God, there is no hell. Stalin and Pol Pot are in the same condition as your grandmother.

  29. I think morality is about intentions, but to attempt to build an altruistic society (if that was what they were trying to do) by being non-altruistic is, at best, oxymoronic.

  30. Lizzie, I don’t think you can build a successful morality on intentions as commonly understood. I know that Christianity emphasizes what is in your heart, but I would argue the Harry Potter Principle: “Where your treasure lies, there your heart will be also.” I wish I knew the original source.

    Taken figuratively, it means we fool ourselves. Individuals and societies fool themselves. It’s even worse when public policy goes after short term goals with flowery sounding rhetoric.

    There is general agreement that societies should organize themselves to maximize well being and minimize pain. But the devil is in anticipating long term consequences.

    Edit: found the quote, not far from where I expected.

  31. petrushka:
    “Where your treasure lies, there your heart will be also.” I wish I knew the original source.

    Search in the Gospels.

  32. Lizzie:
    I think morality is about intentions, but to attempt to build an altruistic society (if that was what they were trying to do) by being non-altruistic is, at best, oxymoronic.

    Maybe; it depends on what you mean by altruistic, harm, and others. But that should be easy for your logical morality.

  33. Would anyone disagree that morality is always about the greater good, the good of groups as it differs from that of individuals? There isn’t a lot of discussion about the morality of eating stuff that you like or driving the best car you can afford.

    Morality is about conflicts between individual wants and group well being. Morality, regardless of how it is derived, is imposed. Even Ayn Rand’s morality is imposed. She just had a different theory of what constitutes the greater good.

    But disagreements over projected consequences are the stuff of moral thought. That is why I think it is impossible to have a completely rational morality. We do not know the future, and we do not know the long term consequences of policies and laws.

    petrushka:
    Would anyone disagree that morality is always about the greater good, the good of groups as it differs from that of individuals? There isn’t a lot of discussion about the morality of eating stuff that you like or driving the best car you can afford.

    Morality is about conflicts between individual wants and group well being. Morality, regardless of how it is derived, is imposed. Even Ayn Rand’s morality is imposed. She just had a different theory of what constitutes the greater good.

    But disagreements over projected consequences are the stuff of moral thought. That is why I think it is impossible to have a completely rational morality. We do not know the future, and we do not know the long term consequences of policies and laws.

    Agreed; so we have Petrushka, Kairosfocus, probably WJM, and me who find that atheists cannot construct a well-founded and logical morality. Lizzie and keiths still insist that they can.

  35. I think theists are at a much greater disadvantage. Rational deductions from false premises are much more dangerous than evolved norms.

  36. petrushka:
    I think theists are at a much greater disadvantage. Rational deductions from false premises are much more dangerous than evolved norms.

    We do not know the future, and we do not know the long term consequences of policies and laws.

  37. That’s true regardless of how you attempt to found your morality.

    Rules can rationally be derived from any set of premises. Rationality is not dependent on whether premises are true or false.

    Morality can never be perfect or complete because moral behavior is about consequences, and consequences cannot be perfectly anticipated. But as Lizzie has pointed out, we can devise rational strategies for coping with indeterminacy.

    I get the feeling, though, that you have some list of behaviors that you think can be derived from theistic morality but not from non-theistic morality.

  38. Petrushka,

    Would anyone disagree that morality is always about the greater good, the good of groups as it differs from that of individuals?

    Lots of people. Of the three major categories of moral philosophy — consequentialism, deontology and virtue ethics — only consequentialism defines morality in terms of “the greater good”.

  39. Blas,

    I’m still interested in your response to this scenario:

    4. William says we should conform to God’s morality because he is our creator, but this argument is bogus. Suppose that humans eventually learn how to create universes. If some pimply-faced teenager creates a universe in his basement, are its inhabitants morally obligated to obey him merely because he is their creator?

  40. I’m not convinced. Give me some concrete situations where different philosophies would lead to different behavior, assuming one could see the future.

  41. The classic example is lying. Under Kant’s deontological morality, lying is wrong at all times and in all circumstances, even if the Nazis are at the door and Anne Frank is in the attic.

  42. Is that an abstract position, or do lots of people believe lying is wrong at all times and places? Has any notable person put that into practice and recorded the results?

  43. I found it and noted that at the end of my post.

    I find it interesting to juxtapose the notion that morality can be based on motive with the observation that our innermost being can be corrupted.

    All the more reason that morality is societal and evolutionary rather than derived from first principles.

    One can, of course, stamp one’s feet and insist the first principles exist, but I don’t know of any widespread agreement among people as there is in the principles of arithmetic, for example.

  44. keiths:
    Blas,

    I’m still interested in your response to this scenario:

    I answered at least two times. If God exists, he is, was, and will be only one. He will not change.

  45. petrushka:

    I get the feeling, though, that you have some list of behaviors that you think can be derived from theistic morality but not from non-theistic morality.

    Of course, if God exists, he created us with a purpose. So we “ought to” do what lets us reach that purpose. Morality is all about what we “ought to” do “in order to …”.

  46. Religion does not reflect a unified and unchangeable god. So these unchangeable aspects of god are not accessible to humans.
