What would Darwin do?

At Evolution News and Views, David Klinghoffer presents a challenge:

Man needs meaning. We crave it, especially when faced with adversity. I challenge any Darwinist readers to write some comments down that would be suitable, not laughable, in the context of speaking to people who have lived through an event like Monday’s bombing. By all means, let me know what you come up with.

Leaving aside Klinghoffer’s conflation of “Darwinism” with atheism, and reading it as a challenge for those of us who do not believe in a supernatural deity or an afterlife (which would include me), and despite lacking the eloquence of the speakers Klinghoffer refers to, let me offer some thoughts, not on Monday’s bombing specifically, but on violent death in general, which probably touches us all at some time. Too many lives end far too soon:

We have one life, and it is precious, and the lives of those we love are more precious to us than our own. Even timely death leaves a void in the lives of those left, but the gap left by violent death is ragged, the raw end of hopes and plans and dreams and possibilities. Death is the end of options, and violent death is the smashing of those options. Death itself has no meaning. But our lives and actions have meaning. We mean things, we do things, we act with intention, and our acts ripple onwards, changing the courses of other lives, as our lives are changed in return. And more powerful than the ripples of evil acts are acts of love, kindness, generosity, and imagination. Like the butterfly in Peking that can cause a hurricane in New York, a child’s smile can outlive us all. Good acts are not undone by death, even violent death. We have one life, and it is precious, and no act of violence can destroy its worth.

823 thoughts on “What would Darwin do?”

  1. William J. Murray:
    Lizzie on April 26, 2013 at 2:03 pm said, in an attempt to paraphrase William J. Murray:

    William J. Murray on April 26, 2013 at 3:26 pm explicitly stated in response to this mistaken characterization:

    Your current paraphrase, not even a week later, completely reverses the “cart, horse” order of “my stance”, as you call it, yet again. After asking you to stop trying to paraphrase me, and being cajoled by others to explain why your paraphrasing is erroneous, I went ahead and gave you an extensive, numbered system of the steps I take, and in what order, to arrive at my moral code to correct the very “paraphrasing error” you repeat here.

    I don’t think the ordering of my points is crucial. The argument seems to me to be essentially commutative.

    I shall re-paraphrase, by reordering most of what I originally wrote:

    William rejects the value of a moral code that is not rationally coherent [omit part about why Darwinian morality is not rationally coherent].

    The only system under which we would have a rational, coherent moral code would be if there were a system whereby there actually were ultimate consequences for ignoring our innate sense for what is immoral. In other words, if there were an absolute morality.

    It makes sense to think of such a system as a God who created a) a system with such consequences, and b) me, with an innate sense of those ultimate consequences.

    Under such a supposition, because I am a rational being, my perception that it would be wrong to do something that would give me immediate pleasure must derive from an innate sense of morality (i.e. of the ultimate consequences to me of such an act).

    My innate sense tells me that ultimately torturing babies would do me harm. I can use this as the starting point for a moral code that tells me what absolute morality consists of.

    Am I closer?

    Come on, William, you need to work on your cult leadership skills!

  2. Am I closer?

    To what? You didn’t paraphrase my position, you just arranged what I said in a manner convenient to your position and argument – for the 3rd time in a single thread.

  3. OMagain:
    Whomever your deity defines as a monster, a monster they be.

    Ontological proof of the existence of God.

  4. William J. Murray: To what? You didn’t paraphrase my position, you just arranged what I said in a manner convenient to your position and argument – for the 3rd time in a single thread.

    William, I am trying to understand you!

    I’m not stating a “position” or making an argument. I’m simply trying to understand yours.

  5. petrushka:
    I would deny that a fully rational morality is possible at all. The prospects are not improved by assuming a creating entity. One merely pushes the origin problem back a generation.

    Yes, assuming a creating entity does not solve the problem; you need a real creating entity that gives you the moral rule. If that God exists, it is the only way to have a rationally founded morality.

  6. Blas, you had a helpful comment upthread I’ve been meaning to get back to! Will do so later today, I hope.

  7. Blas: Yes, assuming a creating entity does not solve the problem; you need a real creating entity that gives you the moral rule. If that God exists, it is the only way to have a rationally founded morality.

    Okay. Assuming that for the sake of argument, what follows?

  8. William, the majority of people here have an excellent command of English.

    If they appear repeatedly to misunderstand your posts, might the problem be with, well, your posts?

    Or do you perhaps want to be able to claim that no-one is clever enough to understand you?

  9. damitall2:
    William, the majority of people here have an excellent command of English.

    If they appear repeatedly to misunderstand your posts, might the problem be with, well, your posts?

    Or do you perhaps want to be able to claim that no-one is clever enough to understand you?

    I have repeatedly and explicitly, in this very thread, stated flat out what I think the “understanding” problem is, and it has absolutely nothing to do with “cleverness” or “intelligence” or “command of English”.

  10. Lack of trying isn’t the problem. Fundamentally oppositional conceptual/computational frameworks are the problem.

  11. William J. Murray: I have repeatedly and explicitly, in this very thread, stated flat out what I think the “understanding” problem is, and it has absolutely nothing to do with “cleverness” or “intelligence” or “command of English”.

    I know you have stated what you think the problem is. But perhaps you aren’t correct? Perhaps the problem is that you aren’t being very clear?

  12. Apparently in order to follow WJM’s rational argument one must first accept an irrational premise. At least one.

    How about this? To have a rationally founded air traffic control system there must be a creator god, etc.

  13. Thing is, it’s important to check that one hasn’t caught oneself in a closed loop.

    I suggest that William has done so:

    • If people don’t understand his argument, he concludes that they are incapable of understanding it.
    • If they think they have understood his argument and disagree with it, he concludes that they haven’t understood his argument.

    So let me try to cut the Gordian loop:

    William, what would you see as legitimate arguments against your position?

    Or do you find it flawless?

  14. WJM’s fallback position is a familiar one: that “materialists” have such an a priori blinkered mindset as to be incapable of ‘true’ comprehension. He is trying to explain to us ‘automata’ why he isn’t one. Naturally, because he has all the trappings of being the same species as us, we are unable to perceive the means by which this can be achieved (as distinct from being believed to have been achieved by said self-styled ‘non-automaton’).

  15. Illustrative examples of rational morality in action would be welcomed. Preferably examples where all god believers arrive at different prescriptions for behavior than non-believers.

  16. Blas,

    Yes, assuming a creating entity does not solve the problem; you need a real creating entity that gives you the moral rule. If that God exists, it is the only way to have a rationally founded morality.

    I addressed this in my last comment:

    4. William says we should conform to God’s morality because he is our creator, but this argument is bogus. Suppose that humans eventually learn how to create universes. If some pimply-faced teenager creates a universe in his basement, are its inhabitants morally obligated to obey him merely because he is their creator?

  17. petrushka: Okay. Assuming that for the sake of argument, what follows?

    Check if that God exists. If not, forget morality.

  18. petrushka:
    Illustrative examples of rational morality in action would be welcomed. Preferably examples where all god believers arrive at different prescriptions for behavior than non-believers.

    That is impossible because not all believers believe the same and not all non-believers behave the same. You have to be more specific.

  19. keiths:
    Blas,

    I addressed this in my last comment:

    I addressed that observation before, too.

    “Well, you have to start by not thinking that God, if He exists, is somebody who one day wakes up and says, ‘I’m bored; 2+2=4, but today I’ll make 2+2=5.’ God is eternal not because He does not die and endures for all time, but because He is outside of time. He created time. And if He exists, He is goodness; do not look for another definition in the Bible. And morality is better defined as conforming to our purpose, which is the one God intended for us.”

  20. Blas: Check if that God exists. If not, forget morality.

    I think that absolute morality has never been a useful concept, so I’m not particularly sympathetic to your concern. The fact that in your very next post you anchor the details to the particular beliefs of individuals places absolute morality on the same footing as the weakest versions of relativistic morality.

    Even worse, actually, since religions have historically used differences to justify war. Consensus morality creates a lot of shouting and politicking, but doesn’t justify war.

    Wars still happen, but it is obvious they are about power and not about morality.

  21. None of that supports your claim that we are obligated to obey God because he created us and gave us moral rules.

    In the case I laid out, do you think the inhabitants of the created universe are morally obligated to obey their teenage creator? If yes, on what basis? If no, then you are conceding that your claim is wrong.

  22. I guess our problem revolves around disputed premises:

    1. God exists
    2. God has moral attributes or teachings
    3. WJM and Blas are privy to these attributes.

    I hope you realize that rationality is pretty useless unless all parties accept the premises.

    Differences in the perception of god and his moral attributes are equivalent to personal differences in moral sentiment.

    Whether people differ in their moral sense for arbitrary reasons or due to different perceptions of god has no practical consequences. Morality is still negotiated.

  23. Liz said:

    If people don’t understand his argument, he concludes that they are incapable of understanding it.

    If they think they have understood his argument and disagree with it, he concludes that they haven’t understood his argument.

    You could say this about virtually anyone in just about any argument in just about any forum on the internet.

  24. That’s item #1 on my long list of criticisms:

    1. William complains that under “Darwinian morality”, moral disputes can’t be conclusively resolved because the justification for a moral decision boils down to “because I, personally, think so.” He doesn’t seem to realize that moral disagreements can’t be adjudicated under his system, either, because the justification for a moral decision boils down to “because I, personally, think this is what God wants.”

  25. Rationality is overrated. Useless until agreement is reached on premises.

    I can’t help but think that WJM and Blas are using the “necessity” of god as an argument to believe in god.

    WJM in particular seems to be suggesting that nonbelievers are somewhat subhuman.

    I know that’s not what he literally says, but there’s an undercurrent.

  26. William J. Murray:
    Liz said:

    If people don’t understand his argument, he concludes that they are incapable of understanding it.

    If they think they have understood his argument and disagree with it, he concludes that they haven’t understood his argument.

    You could say this about virtually anyone in just about any argument in just about any forum on the internet.

    Well, to some extent it’s an inevitable byproduct of sincerity. As a friend of mine used to have in his sig: Of course I think I’m right – if I thought I was wrong, I’d change my mind!

    But that is a little different from getting yourself into a closed loop that excludes the possibility that you might be mistaken (hence the strapline of this blog) – if you find yourself assuming that all people who disagree with you are “biological automatons” incapable of understanding your argument, then you render yourself invulnerable to any potentially legitimate criticism. Same with conspiracy theories – when people think that the evidence advanced by scientists that suggests climate change (or evolution!) is merely the result of a conspiracy to keep the Truth from the People, they become unreachable, whether the scientists are right or wrong.

    And one thing I’ve learned, over my years of pontificating and being pontificated to on the internet, is that just occasionally you read an argument, or a counter-argument, that makes you go: whoa! That’s actually a good point! I’d been misunderstanding that up till now – now I think I get it! It’s happened to me quite a few times, the most shattering example being when I lost an argument very like the one you are currently making (or at least a close relation).

    And I still miss my God.

    We all think we are right; but that is not the same thing as thinking that we cannot be wrong.

  27. petrushka:
    Rationality is overrated. Useless until agreement is reached on premises.

    I can’t help but think that WJM and Blas are using the “necessity” of god as an argument to believe in god.

    WJM in particular seems to be suggesting that nonbelievers are somewhat subhuman.

    I know that’s not what he literally says, but there’s an undercurrent.

    Oh, he does, literally say it. He thinks that many people do not have free will, but some do. That’s not an undercurrent – that’s clear.

    And in another neck of the woods, Douglas Hofstadter says something similar, except that temperamentally he seems inclined to include as many beings as possible in the free will (or “large-souled”) side of a grey line. Not mosquitoes or tomatoes, but maybe the animals that some of us eat (on purpose, that is – I expect we all eat mosquitoes occasionally by accident, I certainly do cycling home along the river in the May evening…).

  28. WJM writes well and does appear to read and consider opposing arguments. Good things.

    Without implying he is wrong, I think he is impatient and unwilling to look for alternate ways to express his ideas.

    One of the asymmetries in this debate is the extent to which each side is willing to seek alternate paths to understanding.

    I’m sure it’s annoying to ID proponents to be told over and over and over that they don’t understand evolution. I’ve seen that argument, and I can see why it isn’t effective. But it isn’t the only argument made for evolution. It isn’t even the most common.

    After you have said that your opponent has it wrong, there’s some obligation to restate what is right, in different words if necessary, and using different metaphors.

  29. petrushka:
    WJM writes well and does appear to read and consider opposing arguments. Good things.

    Without implying he is wrong, I think he is impatient and unwilling to look for alternate ways to express his ideas.

    One of the asymmetries in this debate is the extent to which each side is willing to seek alternate paths to understanding.

    I’m sure it’s annoying to ID proponents to be told over and over and over that they don’t understand evolution. I’ve seen that argument, and I can see why it isn’t effective. But it isn’t the only argument made for evolution. It isn’t even the most common.

    After you have said that your opponent has it wrong, there’s some obligation to restate what is right, in different words if necessary, and using different metaphors.

    I guess maybe I am smarting from being assumed to be a “biological automaton” 🙂

    Well, not really. But I should say that I do appreciate William’s continued efforts here. But I’m afraid I still (apparently) don’t know what he is trying to say. Every time I seem to see a line of sense in it, it turns out that That’s Not It.

    And so I am still at sea regarding why he thinks that atheists can’t have a rational morality.

  30. Rationality is overrated. Useless until agreement is reached on premises.

    I disagree. Rationality can actually help people come to an agreement on premises, as when person A shows that person B’s premises lead inferentially to an absurdity.

  31. Lizzie: Oh, he does, literally say it. He thinks that many people do not have free will, but some do. That’s not an undercurrent – that’s clear.

    And in another neck of the woods, Douglas Hofstadter says something similar, except that temperamentally he seems inclined to include as many beings as possible in the free will (or “large-souled”) side of a grey line. Not mosquitoes or tomatoes, but maybe the animals that some of us eat (on purpose, that is – I expect we all eat mosquitoes occasionally by accident, I certainly do cycling home along the river in the May evening…).

    There’s not much I agree with William on, but I do understand his point that Free Will may be a contingent property in people. There are an uncomfortably large number who do not consider the motivations or roots of their action, who simply react emotionally, and too often predictably.

    The problem with that observation for William is that it’s not a demonstration that some people are subhuman — it’s that the automatons are themselves human, and Free Will is simply not the “you have it or you don’t” trait that he seems to think it is. Some of those same automatons, for example, can in a quiet moment suddenly realize the irrationality of their behaviors, and decide to change them. We’ll never know, though, if our own experience with them has been in a limited number of events prior to their revelations — or after they’ve forgotten them and returned to their knee-jerk habits.

    It works the other way, too. Some of those who pride themselves on their advanced levels of rationality and free will can have some very annoying and predictable reactionary traits that they have great difficulty overcoming, and unless they really make a heroic attempt, sink into knee-jerk behaviors of their own.

    So what it comes down to, is that most people DO have Free Will — sometimes. And most people also seem to be lacking in it — sometimes.

    And of course, those who are most likely to behave like automatons most of the time, are also those most likely to be unquestioning and defensive believers.

  32. Yes, I more or less agree with this. Which is why I was not, in fact, offended (though frustrated!) at William’s comment about automatons.

    I tend to think in terms of Degrees of Freedom. I think most of us have a substantial number 🙂

  33. keiths:
    None of that supports your claim that we are obligated to obey God because he created us and gave us moral rules.

    In the case I laid out, do you think the inhabitants of the created universe are morally obligated to obey their teenage creator? If yes, on what basis? If no, then you are conceding that your claim is wrong.

    Why do you insist on a teenage creator? A teenage creator is not God. Is it so hard to understand?

  34. 1. William complains that under “Darwinian morality”, moral disputes can’t be conclusively resolved because the justification for a moral decision boils down to “because I, personally, think so.” He doesn’t seem to realize that moral disagreements can’t be adjudicated under his system, either, because the justification for a moral decision boils down to “because I, personally, think this is what God wants.”

    When “because I think so” is assumed to be your fundamental source of morality, there is no reason to believe that what you think may be wrong – there is nothing to measure its wrongness; because you think it defines it as “right”.

    If one agrees that whatever they think may be wrong, and that there is something objective (absolute) by which they could correct their improper thinking, then two people who disagree have a basis for reasonable humility and skepticism about their personal views, and a rational basis by which to pursue rational resolution of their disagreement.

    keiths’ “because I think so” argument can be used on anything – in science, etc. In science, people assume the governorship of laws or principles of material interaction to be valid at all times, everywhere, and by this agreement of an objective (absolute) arbiter of their theories they have means by which to reach rational agreement. However, at the end of the day, it is still just a “because I think so” proposition, and cannot be any other, because all such considerations – even experimental results – reside in thought, and can be said to be held “because I think so”.

    What keiths continues to do is evaluate the proposition of an absolute morality through the lens that no such thing exists – IOW, that it can only be “because I think so” and not, in a sense analogous to experiencing gravity, “because we agree to X as being a fundamental commodity of an objectively existent phenomenon” (self-evident moral truth) and work from there.

    keiths might argue that there is no way to scientifically measure that which they agree is a self-evident truth, but I never claimed that morality lay in the domain of science to be able to measure or codify.

  35. Where have I literally said that people without free will are subhuman?

  36. petrushka:
    I guess our problem revolves around disputed premises:

    1. God exists
    2. God has moral attributes or teachings
    3. WJM and Blas are privy to these attributes.

    I hope you realize that rationality is pretty useless unless all parties accept the premises.

    Differences in the perception of god and his moral attributes are equivalent to personal differences in moral sentiment.

    Agreed. The personal question is whether God really exists or not.

    petrushka:
    Whether people differ in their moral sense for arbitrary reasons or due to different perceptions of god has no practical consequences. Morality is still negotiated.

    I’m not sure. A morality founded on arbitrary reasons, with no consequences other than legal ones, is not the same as a morality based on the belief that your acts are not going to escape reward or punishment. On the other hand, arbitrary reasons make a morality always irrational and unfounded; a reason based on a god makes the morality rational and founded. What may be false or irrational could be the god, or the belief that this god exists.

  37. petrushka:

    I can’t help but think that WJM and Blas are using the “necessity” of god as an argument to believe in god.

    I already said no. Do not include me in this.

  38. keiths: I disagree. Rationality can actually help people come to an agreement on premises, as when person A shows that person B’s premises lead inferentially to an absurdity.

    In the case of premises leading to an absurdity you do not need agreement. You need agreement when all the premises lead to different coherent systems, like morality, or Euclidean and non-Euclidean geometry.

  39. Lizzie: I guess maybe I am smarting from being assumed to be a “biological automaton”

    And so I am still at sea regarding why he thinks that atheists can’t have a rational morality.

    Maybe because we are still waiting for your rational morality.

  40. WJM:

    When “because I think so” is assumed to be your fundamental source of morality, there is no reason to believe that what you think may be wrong – there is nothing to measure its wrongness; because you think it defines it as “right”.

    I doubt that anyone defines things as right because they think them – they think them because they consider them to be right. But that does not make them right. I’ve been wrong about lots of stuff, and changed my mind, and there is plenty of room for mind-changing on the question of God and Objective Morality. But when you talk of ‘skepticism and humility’, little of this comes across in your own writings.

    I think you fundamentally misunderstand how morality is understood by a ‘subjective moralist’. This daft phrase “Darwinist morality” keeps cropping up, and references to ‘maximising one’s reproductive output’, as if accepting evolution means that you sift everything in its light. You don’t, any more than accepting electromagnetic theory makes you sticky. But it does provide a rational explanation of the moral sense.

    Evolution acts to maximise reproductive output. It does so by making men like girls, and girls like men, and people feed themselves when hungry, avoid danger, and so on. And, as a social species, it generates the opportunity for similar ‘reward-penalty’ sensations that improve social cohesion. The descendants of individuals with that social sense survived, where those of sociopaths tended to disappear. Sociality, in this species, was adaptive. It does not create individuals whose sole waking thought is ‘must maximise my reproductive output’. We are the same species, and share mostly the same genes. It is likely that you and I experience much the same in what strikes us as ‘good’ and ‘bad’. You objectify that, seeing its source as a shared connection with The Divine, and I don’t – I see its source as shared genetic heritage, with extensive cultural overlay. It’s not people deciding “today, I think I’ll try a spot of baby-torture”.

  41. Blas: Maybe because we are still waiting for your rational morality.

    Well, I’ll give it again, but I have done so already. It’s based, I guess, on the observation (and indeed logic – as in game theory) that a community that holds a set of socially or legally implemented “altruism” rules (“do as you would be done by”) on balance suits everyone, even those who don’t actually care about anyone else, provided that we have a good way for detecting cheaters and reducing their access to the benefits of the system.

    It’s so rational, you can even set it up as a logic circuit 🙂
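
    To make that concrete, here is a minimal sketch (in Python) of the game-theory point – a toy I am making up on the spot, so the strategy names and payoff numbers are mine and not anyone’s considered model: in a repeated prisoner’s-dilemma-style interaction, a “do as you would be done by” strategy that also withholds cooperation from detected cheaters does very well against itself, while giving a cheater almost nothing to gain.

        # Toy illustration only: a repeated prisoner's dilemma with a reciprocating
        # strategy ("do as you would be done by", plus cheater detection) and a cheater.

        COOPERATE, DEFECT = "C", "D"
        # Standard prisoner's-dilemma payoffs, from the row player's point of view.
        PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

        def reciprocator(history):
            # Cooperate first, then simply repeat whatever the partner did last round.
            return COOPERATE if not history else history[-1][1]

        def cheater(history):
            # Ignores the partner entirely and always defects.
            return DEFECT

        def play(strategy_a, strategy_b, rounds=200):
            # Returns the two total scores after `rounds` repeated interactions.
            history_a, history_b = [], []  # each entry: (my move, their move)
            score_a = score_b = 0
            for _ in range(rounds):
                move_a, move_b = strategy_a(history_a), strategy_b(history_b)
                score_a += PAYOFF[(move_a, move_b)]
                score_b += PAYOFF[(move_b, move_a)]
                history_a.append((move_a, move_b))
                history_b.append((move_b, move_a))
            return score_a, score_b

        print("reciprocator vs reciprocator:", play(reciprocator, reciprocator))  # (600, 600)
        print("reciprocator vs cheater:     ", play(reciprocator, cheater))       # (199, 204)
        print("cheater vs cheater:          ", play(cheater, cheater))            # (200, 200)

    Two reciprocators end up far better off than two cheaters, and a cheater gains almost nothing from exploiting a reciprocator once the cheating is detected and cooperation is withdrawn – which is all I mean by the rule set, on balance, suiting everyone.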

  42. William J. Murray:
    Where have I literally said that people without free will are subhuman?

    You have called them/us “automata”. As most people consider free will a human characteristic, and do not consider automata human, the implication is fairly strong.

    I don’t think it’s a terrible thing to say (as long as you don’t propose denying human rights to automata, which I don’t expect you do!), and to a certain extent I agree with you, except that I wouldn’t draw a hard-and-fast line (I think freedom is best considered on a continuum), and I’d put a lot of non-human animals quite high in the range. On the other hand, I’d put a human conceptus pretty far down.

  43. Lizzie: Well, I’ll give it again, but I have done so already.

    Let me check your rationality:

    Lizzie
    It’s based, I guess,

    Guessing is a bad start for rationality.

    Lizzie
    on the observation (and indeed logic – as in game theory) that

    Observation by whom, of what? Is that a rational argument?

    Lizzie
    a community that holds a set of socially or legally implemented “altruism” rules (“do as you would be done by”)

    Social and legal rules are not morality. I do not know of any society that has implemented the rule “do as you would be done by”; could you give an example?

    Lizzie
    on balance suits everyone,

    On balance means on average, so you have excluded some people from your “suits everyone”; then the rule “do as you would be done by” is not for everyone, and you have an inconsistency in your rationale.

    Lizzie
    even those who don’t actually care about anyone else, provided that we have a good way for detecting cheaters and reducing their access to the benefits of the system.

    So morality makes sense only if cheaters can be caught and punished, something that we know happens only in California.
    Lizzie
    It’s so rational, you can even set it up as a logic circuit

  44. William, what would you see as legitimate arguments against your position?

    Well, that’s at least an unexpected question. Why not just ask me to disprove my own argument?

    In fact, I like that challenge better. Here, I will disprove my own argument (that only theism can offer a rationally consistent, meaningful moral system):

    The easiest part of my position to defeat is, of course, the “meaningful” part of it. My argument here is banking on my opinion that those I am arguing against would not find a morality meaningful that was entirely relative as far as goals were concerned, entirely haphazard and usually entrenched in corruption as far as consequences were concerned, and that required certain relativistic principles they would find entirely unpalatable.

    I, myself, found an admitted relativistic morality quite meaningful for many years. Liz would know this, had she actually comprehended the book I wrote that she says she read.

    Someone in this thread actually made a case about them personally finding relative morality meaningful even if it was not applicable to those who disagreed, and I admitted that the “meaningful” portion of my argument would not be applicable to them.

    All it really takes to defeat this part of my argument is to agree one’s view is relative (because I/we think so) and still sufficiently meaningful to them personally; this part of my argument bets on people not being willing to agree that “because I think so” is meaningful (palatable, really) as a moral basis – even though it was for me for years.

    I knew this part of my argument could be rather easily defeated; I just bet that most people here wouldn’t be willing to concede ground they had no right to in order to defeat it, like when Liz attempted to define morality as “altruism”, which she employed to make her particular “because I say so” flavor of morality seem to be more than it was. I’m not claiming she did so deliberately; most people who pick a particular kind of relative morality to live by do the same thing, say “this is best because …” and then give reasons that assume the consequent.

    The “rationally coherent” part of my argument depends upon the problem of competing and conflicting fundamental definitions of “morality”, and how they are resolved. Any morality can be internally consistent, but one cannot rationally apply what they agree is a relative standard in a way that judges behavior engaged in under a different relativistic standard. It’s just not possible. The rulers my kingdom uses internally cannot be rationally applied to count the rulers used by another kingdom as “wrong”, when we use two different systems of measurement we both call “inches”, and systems by which we measure distance are entirely different.

    The question is, by what moral principle can one call the other system “wrong”? If one extends the principle of “because I/we think so” (relative morality) to other moral groups, they have as much right to their moral system as you do to yours, even if they directly contradict each other. You have no right to call anything they do “wrong”, or “immoral”, because it is legitimately moral by the same principle that authorizes your morality – “because I/we think so”.

    Conceptually, the two conflicting moralities are equal, both legitimized by “because I/we think so”. There is only one relativistic moral concept that can serve as a legitimate basis for resolving the difference between conflicting relative moralities (that I have been able to find, anyway), and that is: because I/we can.

    If group A has a morality based on altruism, and group B has a morality based on the weak serving the needs of the strong, and both agree that their moralities are not absolute, but are relative; then rationally, neither can say that the other group has an immoral morality, because they accept that morality is a subjective commodity. Both may say that the other group is immoral only by their own internal, subjective judgement.

    Now, one can have the relative moral position that all people – even those outside of their group – should be interacted with under the auspices of their moral code, even if one doesn’t hold that moral code as absolute. But, how to resolve any significant conflicts between the two moralities? There is no rational, conceptual argument to be made because there is no assumed common ground to resolve such differences; neither group claims that their views reflect any absolute morality based upon self-evident truths that might be agreed upon. If both groups hold that one ought treat those outside of their group as if they were inside, one can easily see the problem when our two competing groups run into each other.

    One way or another, in one form or another, as physical force, rhetorical manipulation, conniving deception, or majority rules – the only principle that can be rationally referred to in order to resolve conflicts between conceptually equal and relative (because I say so) moralities is the principle of “because I/we can”. IOW, group A is not judging Group B “immoral”, but rather Group A determines the moral “right” of the two competing views – including their own – via the subjectively held principle of “because I/we can”.

    So, if one agrees that the only moral right they can refer to when adjudicating moral issues with/upon those that disagree with any fundamental conceptual aspect of their morality is “because I/we can” (might gives right, if not might makes right), then they have a rationally coherent subjective morality – if you can stomach calling “might makes right” moral.

    Or, if they agree that their moral system does not apply to anyone that fundamentally disagrees, and allows them to do whatever their own relative morality says without judgement or coercion, then they have a rationally coherent subjective morality – if you can stomach an “anything goes” morality.

    So yes, it is possible to have a rationally coherent, meaningful, relative (without god) morality; my argument was just banking on no one here having the guts (or capacity) to go there in order to rebut my argument. Many other Darwinistic (atheistic materialist) philosophers in history were/are willing to go there. I should know; I was one of them.

  45. I don’t consider free will a human characteristic, but rather a characteristic some humans have. You might as well say I consider people with blue eyes subhuman.

    In any event, I don’t think you can point to anywhere where I literally said what you claim.

  46. William J. Murray:
    I don’t consider free will a human characteristic, but rather a characteristic some humans have. You might as well say I consider people with blue eyes subhuman.

    OK. You claim, however, that humans come in two grades: automata and those with free will?

  47. I make no such claim. I hold it as a reasonable perspective only for the purpose of modifying my own behavior and reactions in a positive, moral way, not because I can support it (as all claims must be) by any significant evidence.

    All of my opinions and beliefs are not claims I make about the actual nature of things. I believe things because I wish to, not because I claim them to be true or actual. Some of my beliefs I can support, but I do not believe them because I can support them.

  48. William J. Murray: Well, that’s at least an unexpected question. Why not just ask me to disprove my own argument?

    Well, I often find it a useful exercise. In fact, it’s how science proceeds – you make a finding, then you spend a considerable time trying to make holes in it. Sometimes you succeed 🙂

    In fact, I like that challenge better. Here, I will disprove my own argument (that only theism can offer a rationally consistent, meaningful moral system):

    Cool.

    The easiest part of my position to defeat is, of course, the “meaningful” part of it. My argument here is banking on my opinion that those I am arguing against would not find a morality meaningful that was entirely relative as far as goals were concerned, entirely haphazard and usually entrenched in corruption as far as consequences were concerned, and that required certain relativistic principles they would find entirely unpalatable.

    OK. So a “meaningful” morality is one that is not relative, haphazard or corrupt, where “relative” means: anyone can figure out their own morality. Fair enough.

    I, myself, found an admitted relativistic morality quite meaningful for many years. Liz would know this, had she actually comprehended the book I wrote that she says she read.

    I see courtesy doesn’t form part of your moral code, William 🙂 Yes, I have read your book (I don’t merely “say” I have read it). I didn’t comprehend it, as I have told you, so you needn’t leave that at my door like a dead mouse. But I do understand that for many years you had what you describe as a “relativistic morality”, and that you found it “meaningful”. However, I was still in the dark as to what you meant by “relativistic morality”. I think you have now clarified this.

    Someone in this thread actually made a case about them personally finding relative morality meaningful even if it was not applicable to those who disagreed, and I admitted that the “meaningful” portion of my argument would not be applicable to them.

    I think it might have been Petrushka, and I might have agreed. But whether I do or not really depends on what the term is intended to mean. And I certainly think that ethics (what is moral in a given circumstance) are not “absolute” – sometimes it’s right to kill, sometimes it isn’t. But I hope we have dealt with that already, because I think you agree that this is the case. Perhaps you don’t. However, below, you define “relative” as “because I/we think so”. I do agree that a morality that is entirely self-chosen (e.g. someone who thinks that everything is OK apart from killing hedgehogs has as valid a morality as someone who thinks that torturing babies for personal pleasure is our sacred duty) is meaningless.

    All it really takes to defeat this part of my argument is to agree one’s view is relative (because I/we think so) and still sufficiently meaningful to them personally; this part of my argument bets on people not being willing to agree that “because I think so” is meaningful (palatable, really) as a moral basis – even though it was for me for years.

    As I say above, I agree that “morality is anything you think it is” makes morality a meaningless concept. Good.

    I knew this part of my argument could be rather easily defeated; I just bet that most people here wouldn’t be willing to concede ground they had no right to in order to defeat it, like when Liz attempted to define morality as “altruism”, which she employed to make her particular “because I say so” flavor of morality seem to be more than it was. I’m not claiming she did so deliberately; most people who pick a particular kind of relative morality to live by do the same thing, say “this is best because …” and then give reasons that assume the consequent.

    I disagree that I assumed the consequent. More on that below. For now, let me just comment that I don’t think anybody here has expressed the view that a pick-your-own morality is meaningful.

    The “rationally coherent” part of my argument depends upon the problem of competing and conflicting fundamental definitions of “morality”, and how they are resolved. Any morality can be internally consistent, but one cannot rationally apply what they agree is a relative standard in a way that judges behavior engaged in under a different relativistic standard. It’s just not possible. The rulers my kingdom uses internally cannot be rationally applied to count the rulers used by another kingdom as “wrong”, when we use two different systems of measurement we both call “inches”, and systems by which we measure distance are entirely different.

    I would agree. Which is why it would be absurd to call sharks “immoral” for not behaving altruistically, for instance. Or to condemn lions for not being vegetarians. Or orcas for torturing baby seals. The issue is whether there are independently derivable (i.e. not pick-your-own, haphazard, or corrupt) grounds for saying that some human moral codes are better than other human moral codes. I think there are.

    The question is, by what moral principle can one call the other system “wrong”? If one extends the principle of “because I/we think so” (relative morality) to other moral groups, they have as much right to their moral system as you do to yours, even if they directly contradict each other. You have no right to call anything they do “wrong”, or “immoral”, because it is legitimately moral by the same principle that authorizes your morality – “because I/we think so”.

    Well, that would be true if moral codes were entirely arbitrary – if they had no rational foundation. However, if moral codes for human societies can be arrived at independently, then they are not arbitrary/haphazard; there would then be grounds for saying that a person is behaving immorally, even if they personally thought (or their society thought) it was their moral duty to torture babies for personal pleasure.

    Conceptually, the two conflicting moralities are equal, both legitimized by “because I/we think so”. There is only one relativistic moral concept that can serve as a legitimate basis for resolving the difference between conflicting relative moralities (that I have been able to find, anyway), and that is: because I/we can.

    So far you have mounted a good argument that if we can make morality mean anything we personally want to, then we don’t have a meaningful morality. However, you have not yet, IMO, demonstrated that it is not possible for human beings collectively to construct a non-arbitrary morality. My position is that it is.

    If group A has a morality based on altruism, and group B has a morality based on the weak serving the needs of the strong, and both agree that their moralities are not absolute, but are relative; then rationally, neither can say that the other group has an immoral morality, because they accept that morality is a subjective commodity. Both may say that the other group is immoral only by their own internal, subjective judgement.

    OK, that’s nice and clear. But I suggest there is a profound difference in kind between those two “moralities”. More below.

    Now, one can have the relative moral position that all people – even those outside of their group – should be interacted with under the auspices of their moral code, even if one doesn’t hold that moral code as absolute. But, how to resolve any significant conflicts between the two moralities? There is no rational, conceptual argument to be made because there is no assumed common ground to resolve such differences; neither group claims that their views reflect any absolute morality based upon self-evident truths that might be agreed upon. If both groups hold that one ought treat those outside of their group as if they were inside, one can easily see the problem when our two competing groups run into each other.

    And I think this is the crux of the issue: are there any self-evident truths that both could agree on, and which would establish one as a more rational, meaningful, superior, whatever, morality than the other? I think there is.

    One way or another, in one form or another, as physical force, rhetorical manipulation, conniving deception, or majority rules – the only principle that can be rationally referred to in order to resolve conflicts between conceptually equal and relative (because I say so) moralities is the principle of “because I/we can”. IOW, group A is not judging Group B “immoral”, but rather Group A determines the moral “right” of the two competing views – including their own – via the subjectively held principle of “because I/we can”.

    So, if one agrees that the only moral right they can refer to when adjudicating moral issues with/upon those that disagree with any fundamental conceptual aspect of their morality is “because I/we can” (might gives right, if not might makes right), then they have a rationally coherent subjective morality – if you can stomach calling “might makes right” moral.

    Well, stomach aside, I disagree with your “if” premise here.

    Or, if they agree that their moral system does not apply to anyone that fundamentally disagrees, and allows them to do whatever their own relative morality says without judgement or coercion, then they have a rationally coherent subjective morality – if you can stomach an “anything goes” morality.

    Again, stomach aside, I am happy to stipulate that a pick-your-own/anything goes morality isn’t worthy of the term.

    So yes, it is possible to have a rationally coherent, meaningful, relative (without god) morality; my argument was just banking on no one here having the guts (or capacity) to go there in order to rebut my argument. Many other Darwinistic (atheistic materialist) philosophers in history were/are willing to go there. I should know; I was one of them.

    OK. So the one exception, as I understand it, to your rule that the only rationally coherent morality is one based on the assumption of a god-given absolute system of consequences for immoral behaviour, would be a morality that says: the judge is the one with the power to enforce the judgement (“might makes right”).

    I’ll respond further in a new comment (maybe a new thread, as this one is getting rather long!)

  49. William J. Murray:
    I make no such claim. I hold it as a reasonable perspective only for the purpose of modifying my own behavior and reactions in a positive, moral way, not because I can support it (as all claims must be) by any significant evidence.

    OK, let me rephrase: you make the working assumption that there are two classes of human being: free-will owners and biological automata, even though you have no evidence to support your model?

    All of my opinions and beliefs are not claims I make about the actual nature of things. I believe things because I wish to, not because I claim them to be true or actual. Some of my beliefs I can support, but I do not believe them because I can support them.

    Oh, yes. I remember that from your book. It makes for confusion, though, because most of us believe things because we have reason to think they might be true.

    Although I’m quite a fan of normative models – behaving as though such a thing were true, because doing so enhances the chance that it will become so.

  50. Still nothing on how disputes are arbitrated in favour of the ‘objective moralist’ – what allows them to avoid some kind of majority rule simply through believing that they are glimpsing, however feebly, God’s wishes?

    Consider a matrix. A population consists of exactly 25% each of Theists A and B, and two culturally distinct “Darwinistic” (FFS!) materialist groups, C and D.

    A and B accept the existence of an objective (ie: external, ‘divine’) standard; C and D do not. How do their respective determinations of the source of their own and others’ moral wishes impact upon the ‘right’ of any group or coalition to impose upon any other? Particularly, how may any difference of opinion between the parties A and B be resolved? They both think they know the mind of God.

    Say A & B were both Muslim sects and jointly declared Sharia Law. Is it just tough nuts to the disbelieving C/D’s – they have to go along because A/B have objective morality on their side; they themselves being mere automata and all?
