Do Atheists Exist?

This post is to move a discussion from Sandbox(4) at Entropy’s request.

Over on the Sandbox(4) thread, fifthmonarchyman made two statements that I disagree with:

“I’ve argued repeatedly that humans are hardwired to believe in God.”

“Everyone knows that God exists….”

As my handle indicates, I prefer to lurk.  The novelty of being told that I don’t exist overcame my good sense, so I joined the conversation.

For the record, I am what is called a weak atheist or negative atheist.  The Wikipedia page describes my position reasonably well:

Negative atheism, also called weak atheism and soft atheism, is any type of atheism where a person does not believe in the existence of any deities but does not explicitly assert that there are none. Positive atheism, also called strong atheism and hard atheism, is the form of atheism that additionally asserts that no deities exist.

I do exist, so fifthmonarchyman’s claims are disproved.  For some reason he doesn’t agree, hence this thread.

Added In Edit by Alan Fox 16.48 CET 11th January, 2018

This thread is designated as an extension of Noyau. This means only basic rules apply. The “good faith” rule and the “accusations of dishonesty” rule do not apply in this thread.

1,409 thoughts on “Do Atheists Exist?”

  1. Neil,

    Who said anything about propositions? I’m just arguing that the cat believes the mouse is behind the door.

  2. keiths: Why on earth would you assume that beliefs must be based on language?

    A cat watches a mouse flee under a closed door and sticks her paws through the gap, trying to catch it. Would you seriously claim that the cat doesn’t believe that the mouse is on the other side of the door?

    If a belief is a propositional attitude, then only competent speakers of a natural language have beliefs. Hence animals don’t have beliefs.

    I’m not yet committing myself to the position that non-human animals don’t have beliefs. I’m exploring a line of thought — although I do think that the idea that semantic content depends on language is both true and has rather profound implications for philosophy of mind.

    If animals have beliefs, then those beliefs aren’t propositional attitudes. So what are they? How do we characterize them? Perhaps non-human animals don’t have beliefs but “aliefs”?

  3. You wrote it, Neil:

    If we have adequate definitions of “meaning” and “truth”, why isn’t AI already working near perfectly (in the sense of artificial persons)?

    If you’re having second thoughts and wish to retract it, just say so.

  4. keiths: Who said anything about propositions? I’m just arguing that the cat believes the mouse is behind the door.

    I’m not going to reject out of hand the idea that animals have beliefs which aren’t propositional attitudes. But if not, then what are they? What makes them beliefs and not some other mental state?

  5. KN,

    I’m not going to reject out of hand the idea that animals have beliefs which aren’t propositional attitudes. But if not, then what are they? What makes them beliefs and not some other mental state?

    I’d say it’s a matter of representation, not propositions. The cat’s mental representation of the world includes a mouse on the other side of the door. In other words, the cat believes there is a mouse there, and acts on that belief.

  6. Kantian Naturalist: I’m not yet committing myself to the position that non-human animals don’t have beliefs. I’m exploring a line of thought — although I do think that the idea that semantic content depends on language is both true and has rather profound implications for philosophy of mind.

    Of course, among the pile of mistakes in Plantinga’s bullshit, a big one is that he never even looks at life forms, asks these basic questions, or considers how cognitive faculties arise in evolutionary terms. So, examining the issue of beliefs in animals is not a bad idea. We should not get lost in the details though. You can think of what they do as a more pragmatic approach …

    Later!

  7. keiths: I’d say it’s a matter of representation, not propositions. The cat’s mental representation of the world includes a mouse on the other side of the door. In other words, the cat believes there is a mouse there.

    Right. A more pragmatic approach.

  8. KN,

    You brought this up as a problem for Plantinga’s argument. Does he assert that beliefs can only be propositional?

  9. keiths: If you’re having second thoughts and wish to retract it, just say so.

    There’s nothing to retract.

    I asked a question that neither you nor fifth understood. I am not holding my breath waiting for answers.

    Just move on.

  10. keiths: I’d say it’s a matter of representation, not propositions. The cat’s mental representation of the world includes a mouse on the other side of the door. In other words, the cat believes there is a mouse there, and acts on that belief.

    And that would be fine if being a mental representation were not only necessary but also sufficient for being a belief.

    Here’s a way of seeing the problem, though: a state can count as a mental representation quite easily if it merely stands in a map-like functional isomorphism to some feature of the environment, or to the animal’s relation to the environment. But once we treat mental representations as somehow map-like, a lot of problems arise.

    Maps can be more or less useful, but it’s hard to say that a map is true or false. So a mental representation, if it is a map-like structure, can likewise be more or less useful. That’s very different from how we ordinarily think of beliefs, because we ordinarily think of beliefs as having truth-value. We take beliefs to be true or false (or maybe indeterminate). But truth and falsity just don’t seem to apply easily to maps.

    So now there’s a new difficulty: if animals have beliefs, can those beliefs be true or false? If the cat’s mental representation of the mouse is a map-like structure of the relevant affordances — stalking, pouncing, and eating — then how could it have beliefs with truth-value, if maps don’t have truth-value? But if the cat’s mental representations don’t have truth-values, then how can they be beliefs?
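
    To make that map/proposition contrast concrete, here is a minimal Python sketch (an editorial illustration, not anything from the thread; all names and structures are hypothetical). The map-like state supports action selection and can be more or less useful, while the propositional attitude is the kind of item that carries a truth-value:

        # A map-like representation: a structure the cat can act on.
        # Usefulness applies to it; "true/false" does not, per KN's worry.
        cat_map = {
            "mouse": {"location": "behind_door"},
            "door": {"gap": True},
        }

        def choose_action(world_map):
            """Select an action from the map; no truth-values involved."""
            if world_map["mouse"]["location"] == "behind_door" and world_map["door"]["gap"]:
                return "paw_under_door"
            return "keep_watching"

        # A propositional attitude, by contrast, pairs a sentence-like
        # content with a truth-value.
        belief = ("The mouse is behind the door", True)

        print(choose_action(cat_map))  # -> paw_under_door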

  11. Neil,

    I understand your question…

    If we have adequate definitions of “meaning” and “truth”, why isn’t AI already working near perfectly (in the sense of artificial persons)?

    …but it’s based on a questionable assumption. Why would you expect adequate definitions of “meaning” and “truth” to lead to near-perfect AI?

    Can you defend that assumption?

  12. keiths: You brought this up as a problem for Plantinga’s argument. Does he assert that beliefs can only be propositional?

    He doesn’t explicitly assert it, but I think he’s committed to it because of how he tries to raise worries about how semantic content can have causal powers. I’m working off his discussion in Where the Conflict Really Lies. There’s a short discussion there where he argues that semantic content can’t have causal efficacy, so meaning ends up being epiphenomenal on the naturalist’s picture.

  13. keiths: Why would you expect adequate definitions of “meaning” and “truth” to lead to near-perfect AI?

    Can you defend that assumption?

    I usually understand Neil quite well (even where I disagree with him), but I’m lost on this point. Neil seems to be thinking that an adequate definition of a concept is the same thing as being able to design an algorithm that operationalizes that concept.

    It’s certainly true that being able to implement something through an algorithmic procedure is one measure of our understanding of it — if we can decompose something into a set of functions, and then design code that will implement those functions on a ‘virtual machine’. But then again, we do have computer simulations of all sorts of things we don’t understand completely (such as the Earth’s climate).

  14. KN,

    And that would be fine if being a mental representation were not only necessary but also sufficient for being a belief.

    It isn’t sufficient, because counterfactual mental representations are possible that aren’t beliefs.

    More later.

  15. keiths: I understand your question…

    Not so. Otherwise you would not be demanding clarification.

    And you apparently did not understand this question, either:

    Why assume that the cat has a concept of “location” and a concept of “mouse”?

  16. keiths:
    You brought this up as a problem for Plantinga’s argument. Does he assert that beliefs can only be propositional?

    It’s not KN, but I think I understand where KN is going with this. Namely, that the way things evolve has consequences for how our cognitive faculties evolved. That means Plantinga’s problem here is that he talks in terms of beliefs and truth, forgetting that this might not be the right language, or else that he has to allow for the role of language-less evolution of behaviour before trying to tackle things in a human-loaded conceptual framework.

    Evolutionarily speaking, our cognitive faculties come from something simpler, and the simpler doesn’t deal with propositions as such. Our cognitive faculties sit on top of more foundational systems, and those systems deal with much more pragmatic shit. The feeling of pain goes directly into avoiding whatever might look potentially painful. No true/false values involved. This might “fool” animals, which is one reason why harmless life forms have evolved that look like the ones causing pain. Then some organisms evolve ways of not being fooled, etc. The point being that simpler cognitive faculties might be fooled, but evolution leads to better and better “calls” if being fooled means dying of starvation, or if not being fooled allows for a wider food choice. This is the famous arms-race model. No true/false, no mistaking truth for some magical thing sticking to stuff. It still helps us understand why, and when, we can trust that our cognitive faculties are making the right choice, such as judging that naturalism and evolution are pretty convincing shit. We didn’t inherit that from our genes; we’re making a judgement call based on experience, and we understand that our cognitive faculties are better off if we take into account more information.

  17. Entropy,

    It’s not KN, but I think I understand where KN is going with this. Namely, that the way things evolve has consequences for how our cognitive faculties evolved. That means Plantinga’s problem here is that he talks in terms of beliefs and truth, forgetting that this might not be the right language, or else that he has to allow for the role of language-less evolution of behaviour before trying to tackle things in a human-loaded conceptual framework.

    I understand the issue that KN is raising, but it’s a problem for Plantinga only if his argument depends on seeing beliefs as language-based propositions.

    Hence my question to KN:

    You brought this up as a problem for Plantinga’s argument. Does he assert that beliefs can only be propositional?

  18. keiths:

    I understand your question…

    Neil:

    Not so. Otherwise you would not be demanding clarification.

    I’m not demanding clarification. I’m asking you to justify your assumption. You know that, which is why you cut off my comment where you did. Here’s the full comment:

    I understand your question…

    If we have adequate definitions of “meaning” and “truth”, why isn’t AI already working near perfectly (in the sense of artificial persons)?

    …but it’s based on a questionable assumption. Why would you expect adequate definitions of “meaning” and “truth” to lead to near-perfect AI?

    Can you defend that assumption?

    KN sees the problem too:

    Neil seems to be thinking that an adequate definition of a concept is the same thing as being able to design an algorithm that operationalizes that concept.

    Can you justify your assumption?

  19. Neil:

    Why assume that the cat has a concept of “location” and a concept of “mouse”?

    I don’t. I explained my view:

    I’d say it’s a matter of representation, not propositions. The cat’s mental representation of the world includes a mouse on the other side of the door. In other words, the cat believes there is a mouse there, and acts on that belief.

  20. Entropy: Evolutionarily speaking, our cognitive faculties come from something simpler, and the simpler doesn’t deal with propositions as such. Our cognitive faculties sit on top of more foundational systems, and those systems deal with much more pragmatic shit. The feeling of pain goes directly into avoiding whatever might look potentially painful. No true/false values involved. This might “fool” animals, which is one reason why harmless life forms have evolved that look like the ones causing pain. Then some organisms evolve ways of not being fooled, etc. The point being that simpler cognitive faculties might be fooled, but evolution leads to better and better “calls” if being fooled means dying of starvation, or if not being fooled allows for a wider food choice. This is the famous arms-race model. No true/false, no mistaking truth for some magical thing sticking to stuff. It still helps us understand why, and when, we can trust that our cognitive faculties are making the right choice, such as judging that naturalism and evolution are pretty convincing shit. We didn’t inherit that from our genes; we’re making a judgement call based on experience, and we understand that our cognitive faculties are better off if we take into account more information.

    All that is a helpful step in the right direction. Though I hasten to add that there’s an interesting question here as to under what conditions the intelligence arms-race gets started. Being intelligent isn’t always adaptive.

    I think it has something to do with how complex the niche is. The more affordances that have to be tracked, and the more dynamic those affordances, the greater the need for some model that predicts or anticipates what sensory states are to be expected and that changes the model if the actual states are not as expected.

    But I have no idea how to model the complexity of a niche.
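
    KN’s “predict, compare, revise” idea can be sketched as a toy error-driven update rule in Python (again an editorial illustration only; the numbers and the learning rate are arbitrary, and nothing here models niche complexity):

        # Toy sketch: a model anticipates a sensory state and revises
        # itself in proportion to the prediction error.
        def update(prediction, observation, learning_rate=0.3):
            """Nudge the prediction toward the observation by a fraction of the error."""
            error = observation - prediction
            return prediction + learning_rate * error

        prediction = 0.0                          # initial expectation
        observations = [1.0, 1.0, 0.8, 1.2, 1.0]  # actual sensory states

        for obs in observations:
            prediction = update(prediction, obs)
            print(f"observed {obs:.1f}, model now expects {prediction:.2f}")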

  21. keiths:

    I’d say it’s a matter of representation, not propositions. The cat’s mental representation of the world includes a mouse on the other side of the door. In other words, the cat believes there is a mouse there, and acts on that belief.

    Neil:

    You are anthropomorphizing. But the cat is not a miniature human.

    You actually believe that humans are the only animals that form mental representations of their surroundings?

  22. keiths: You actually believe that humans are the only animals that form mental representations of their surroundings?

    I haven’t suggested that.

    My question was about the cat having a concept of mouse — as distinct from a concept of the thing it is currently looking at.

    We carve up the world rather finely, because we are a eusocial species. But cats have no need to carve up the world that finely.

  23. keiths:

    You brought this up as a problem for Plantinga’s argument. Does he assert that beliefs can only be propositional?

    KN:

    He doesn’t explicitly assert it, but I think he’s committed to it because of how he tries to raise worries about how semantic content can have causal powers.

    But semantic content needn’t be propositional or language-based, and Plantinga seems to acknowledge this. I found the following sentence in Warrant and Proper Function, the book in which Plantinga first presents the EAAN (his evolutionary argument against naturalism):

    We think our faculties much better adapted to reach the truth in some areas than others; we are good at elementary arithmetic and logic, and the perception of middle-sized objects under ordinary conditions.

    [emphasis added]

    By including perception, he is clearly not limiting his argument to propositional truths.

  24. You made a mistake, Neil. It isn’t a crisis, or something that must be denied at all costs.

  25. Neil,

    My question was about the cat having a concept of mouse — as distinct from a concept of the thing it is currently looking at.

    And as I told you, I’m not assuming that the cat has a concept of “mouse”. The cat simply has a mental representation of her surroundings, in which an animal we would refer to as “a mouse” is represented as being in a particular location, on the other side of what we would refer to as “a door”.

  26. KN,

    I think the right way to attack the EAAN is head-on. I grant that evolution doesn’t “care” about truth, per se; it only cares about what is adaptive.

    However, it isn’t difficult to argue that truth-generating cognitive and perceptual mechanisms are more adaptive than those that consistently fail to generate true beliefs or veridical perceptions.

  27. fifth,

    Perhaps you can do what Neil cannot, and explain how attributing mental representations to a cat, as I do…

    I’d say it’s a matter of representation, not propositions. The cat’s mental representation of the world includes a mouse on the other side of the door. In other words, the cat believes there is a mouse there, and acts on that belief.

    …amounts to “anthropomorphizing” it.

  28. The problem of other minds rears its head again.

    I’m always amazed that modern folks are quick to assume that an animal that behaves somewhat teleologically must think like a human, and at the same time so slow to ascribe a mind to other natural phenomena that seem to behave in ways that are not “mechanical”.

    peace

  29. keiths: And as I told you, I’m not assuming that the cat has a concept of “mouse”.

    You gave a belief that you claimed the cat had. That belief depended on the concept “mouse”. If the cat did not have that concept, it could not have had that belief.

  30. fifth:

    I’m always amazed that modern folks are quick to assume that an animal that behaves somewhat teleologically must think like a human…

    Straw man. No one here is making that assumption.

    I’m amazed that someone could see the cat in my scenario, eagerly pawing under the door, and deny that the cat has a mental representation of the surroundings and of the mouse’s location.

    (The fact that fifth and Neil are the ones denying it makes it somewhat less amazing.)

  31. keiths: You made a mistake, Neil.

    Keiths, who has appointed himself omniscient, has declared many TSZ members to have made mistakes.

    Those of us who do not accept this omniscience disagree with many of those declarations of error.

  32. fifth:

    …and at the same time so slow to ascribe a mind to other natural phenomena that seem to behave in ways that are not “mechanical”.

    What phenomena are those?

  33. keiths:
    I think the right way to attack the EAAN is head-on. I grant that evolution doesn’t “care” about truth, per se; it only cares about what is adaptive.

    And thereby you fall into Plantinga’s sophistry. The very idea of a natural phenomenon “caring” about “truth” or not is a misconception. Natural phenomena cannot assign labels to propositions. Translating what Plantinga means into something philosophically acceptable is not our task; it’s Plantinga’s, since it’s him who’s producing this philosophical travesty. I’m not in the business of translating absurdities into something sensical. More so if the sophistry is intended to mislead, as in this case. I’d rather let the idiot drown in his own bullshit than help him out.

    keiths:
    However, it isn’t difficult to argue that truth-generating cognitive and perceptual mechanisms are more adaptive than those that consistently fail to generate true beliefs or veridical perceptions.

    See what I mean? Truth-generating cognitive and perceptual mechanisms are an absurdity. True/false are labels for propositions. We use the cognitive and perceptual mechanisms to apply that label, not to generate a truth.

    There’s also the tiny little detail of falling for the absolutist mentality foundational to Plantinga’s bullshit (the true/false dichotomy, as if everything were that simple). This is a nice trap, and we could actually exploit it to expose it. But more about this later. I am noticing that my thoughts are not organized enough to make this shot right now. Well, that’s also a consequence of the many layers of bullshit involved in Plantinga’s sophistry against evolution and naturalism.

  34. Neil,

    I haven’t claimed omniscience, and if your assumption wasn’t an error, then you should be able to justify it.

    KN and I both see the problem. You claim it isn’t a problem. Justify your assumption, then.

  35. Neil,

    You gave a belief that you claimed the cat had. That belief depended on the concept “mouse”.

    No, it didn’t. I explained this already:

    And as I told you, I’m not assuming that the cat has a concept of “mouse”. The cat simply has a mental representation of her surroundings, in which an animal we would refer to as “a mouse” is represented as being in a particular location, on the other side of what we would refer to as “a door”.

  36. keiths:

    I think the right way to attack the EAAN is head-on. I grant that evolution doesn’t “care” about truth, per se; it only cares about what is adaptive.

    Entropy:

    And thereby you fall into Plantinga’s sophistry. The very idea of a natural phenomenon “caring” about “truth” or not is a misconception.

    Did you somehow miss the quote marks around “care”?

    Natural phenomena cannot assign labels to propositions.

    They don’t need to, and Plantinga’s argument does not depend on such labeling.

    Translating what Plantinga means into something philosophically acceptable is not our task; it’s Plantinga’s, since it’s him who’s producing this philosophical travesty.

    Plantinga’s argument isn’t that difficult to understand. Why not try harder?

    I’m not in the business of translating absurdities into something sensical. More so if the sophistry is intended to mislead, as in this case. I’d rather let the idiot drown in his own bullshit than help him out.

    Believe me, Plantinga is not relying on Entropy’s help. And while his argument is wrong, I don’t think it’s absurd. It deserves a serious reply.

    Waving your arms and shouting “bullshit!” is not an effective counterargument.

  37. Neil Rickert: Keiths, who has appointed himself omniscient, has declared many TSZ members to have made mistakes.

    I’m not sure, but I think it’s one of the secret tenets of keiths’ new religion that adherents must annoyingly adopt the posture of an opinionated dictatorial schoolmarm.

    😉

    Peace

  38. Neil,

    If we have adequate definitions of “meaning” and “truth”, why isn’t AI already working near perfectly (in the sense of artificial persons)?

    Your error is pretty easy to see. Having an adequate definition of something doesn’t mean we know how to achieve it, much less “near perfectly”.

    Having an adequate definition of world peace doesn’t mean we know how to achieve it.

    Having an adequate definition of cheap hypersonic flight doesn’t mean we can fly cheaply at hypersonic speeds.

    Having an adequate definition of a hole-in-one doesn’t enable a golfer to hit one almost every time they tee off.

    And having adequate definitions of truth and meaning doesn’t enable us to implement near-perfect AI.
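
    A classic computer-science example drives the point home (an editorial illustration, not one raised in the thread): the halting predicate has a perfectly adequate definition, yet Turing proved that no program can implement it. A Python sketch of the standard diagonal argument:

        # Definition (perfectly adequate): halts(p, x) is True iff
        # program p terminates on input x. Now suppose it were implemented:
        def halts(program, data):
            """Hypothetical halting decider; provably cannot exist in general."""
            raise NotImplementedError("no general implementation is possible")

        def contrary(program):
            # Loop forever exactly when the decider says we would halt.
            if halts(program, program):
                while True:
                    pass

        # contrary(contrary) halts iff it doesn't halt -- a contradiction.
        # The definition is as precise as definitions get; what's missing
        # is any route from definition to implementation.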

  39. keiths: Your error is pretty easy to see. Having an adequate definition of something doesn’t mean we know how to achieve it, much less “near perfectly”.

    I’m going by Richard Hamming’s thesis, that you don’t really understand something if you cannot program it. In my own experience, I’ve found that thesis to work pretty well.

    Sorry, I don’t currently have a citation.

  40. Neil,

    That doesn’t help you. Defining something isn’t the same as understanding how to implement it.

    Look at my examples again.

  41. I would say that being unable to implement something, assuming the physical resources are available, means your definition is insufficient.

  42. petrushka,

    I would say that being unable to implement something, assuming the physical resources are available, means your definition is insufficient.

    It means your definition is insufficient as an implementation guide. But definitions aren’t implementation guides — they’re definitions!

    JFK defined a goal for the US:

    I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the earth.

    Was that an “adequate definition”? You bet. It stated the goal precisely and unambiguously, and it galvanized a nation. Was it an implementation guide? Not in the slightest.

  43. Neil Rickert: I’m going by Richard Hamming’s thesis, that you don’t really understand something if you cannot program it.

    Interesting. This thesis seems to trade on a reductionist understanding of “understanding.”

    I would argue that there are lots of things that aren’t programmable, including human behavior and evolution.

    In fact I would say that it’s precisely our inability to “program” a phenomenon that inevitably leads us to infer that there is a mind behind it, either directly or indirectly.

    If in fact a phenomenon is due to mind, then inferring a mind behind it is the ultimate act of understanding.

    This topic is a lot more interesting than whether Atheists exist.

    peace

  44. keiths:
    Did you somehow miss the quote marks around “care”?

    I didn’t. I left the quotes in my answer; I was pointing to the misuse of “truth.”

    keiths:
    They don’t need to, and Plantinga’s argument does not depend on such labeling.

    Plantinga’s argument becomes absurd the moment he talks about truth as if it were some kind of property, rather than the label that it is. His argument depends on this equivocation because it leads to absolutist terms. This is one of the foundations for his “calculations” of the “probability” that some belief is “true.” Part of the sophistry.

    keiths:
    Plantinga’s argument isn’t that difficult to understand. Why not try harder?

    Because it’s absurd. Trying harder won’t make it any less absurd. People fall into the trap the moment they translate the absurdities and thus buy into the loaded premises.

    keiths:
    Believe me, Plantinga is not relying on Entropy’s help. And while his argument is wrong, I don’t think it’s absurd. It deserves a serious reply.

    No. He’s relying on yours. Of course the argument is absurd. It’s a compounded fallacy.

    keiths:
    Waving your arms and shouting “bullshit!” is not an effective counterargument.

    Pointing to the absurdity from the very beginning is a counterargument. The whole edifice falls once the loaded language and sophistry are left in the open.
