The Laws of Thought

aren’t.

They are perfectly valid rules of reasoning, of course. Wikipedia cites Aristotle:

  • The Law of Identity: “that every thing is the same with itself and different from another”: A is A and not ~A.
  • The Law of Non-contradiction: that “one cannot say of something that it is and that it is not in the same respect and at the same time”
  • The Law of Excluded Middle: “But on the other hand there cannot be an intermediate between contradictories, but of one subject we must either affirm or deny any one predicate.”

And of course they work just fine for binary, true-or-false, statements, which is why Boolean logic is so powerful.

But I suggest they are not Laws of Thought.

As far as I can see (and I’m neither a philosopher nor a logician) they don’t work at all well for probabilistic statements:

  • A thing can be both possibly A but also possibly not-A.
  • A thing can be possibly something, and possibly not something, in the same respect and in the same time.
  • And most importantly, a proposition can be possibly true AND its negation can be also possibly true.

Jacob Cohen in his great paper, The Earth is Round (p<.05) writes:

The following syllogism is sensible and also the formally correct modus tollens:

  • If a person is a Martian, then he is not a member of Congress
  • This person is a member of Congress
  • Therefore, he is not a Martian.

Sounds reasonable, no? This next syllogism is not sensible because the major premise is wrong, but the reasoning is as before and still a formally correct modus tollens:

  • If a person is an American, then he is not a member of Congress. (WRONG!)
  • This person is a member of Congress
  • Therefore, he is not an American.

If the major premise is made sensible by making it probabilistic, not absolute, the syllogism becomes formally incorrect and leads to a conclusion that is not sensible:

  • If a person is an American, then he is probably not a member of Congress (TRUE, RIGHT?)
  • This person is a member of Congress.
  • Therefore, he is probably not an American.
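
A minimal numerical sketch in Python of why this last syllogism fails, using rough, assumed round figures rather than real census data: by Bayes’ theorem, the probability of being American given membership of Congress is essentially 1, even though the probability of being a member of Congress given that one is American is tiny.

```python
# Hedged sketch: all figures are rough assumptions for illustration only.
p_american = 320e6 / 7.1e9               # assumed prior: fraction of the world population that is American
p_congress_given_american = 535 / 320e6  # ~535 members of Congress among ~320 million Americans (assumed)
p_congress_given_other = 0.0             # members of Congress must be American citizens

# Bayes' theorem: P(American | member of Congress)
p_congress = (p_congress_given_american * p_american
              + p_congress_given_other * (1 - p_american))
p_american_given_congress = p_congress_given_american * p_american / p_congress

print(f"P(Congress | American) = {p_congress_given_american:.1e}")   # tiny, so the major premise holds
print(f"P(American | Congress) = {p_american_given_congress:.2f}")   # 1.00, so the 'conclusion' is exactly backwards
```

The direction of the conditioning matters: P(A given B) and P(B given A) can be wildly different, which is exactly what the binary form of the syllogism hides.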

All this doesn’t mean that the so-called Laws of Thought are false, but that they aren’t Laws of Thought, because, I submit, thinking is fundamentally probabilistic, and probably (heh), specifically, Bayesian. The so-called Laws of Thought are a special case of human reasoning, applicable when we are dealing with certainties, or with things as certain as we can make them by defining our axioms ab initio and per arguendo. But in day-to-day reasoning, nothing is certain, and we behave like scientists (at best) or like politicians (at worst), making the best-fitting models we can from the data available, and conducting on-line error-minimising optimisation processes to reach provisional decisions (if we are lucky) or least-risky decisions (if we have to act before we have access to all the data we’d ideally like).

And I suspect that this is why Barry et al at Uncommon Descent have such a Thing about the three classical Laws of Thought.  ID, as she is spoke, is not scientific, although sometimes mathematical, and frequently theological.  Which is not to say ID couldn’t be scientific – it could.  But to be so, its practitioners would have to understand the probabilistic nature of scientific reasoning, and the provisional nature of its conclusions, and for all the ID words that have been written about probability, ID proponents do not, in my experience, fundamentally understand what probabilities are probabilities of, and what information is required to calculate them (even though they define probabilities as bits of information).

I suggest they need to get outside the binary “Laws of Thought”, and start to understand probabilistic Thinking.  Which will also lead to a greater understanding of Intelligence, and thus of Design, but does require dropping the notion that any proposition must be either True or False.

74 thoughts on “The Laws of Thought”

  1. I think “if…then…else” is pretty hard-wired in.

    After all, neurons sum inputs to produce outputs, and inputs can be both excitatory and inhibitory.

  2. I think “if…then…else” is pretty hard-wired in.

    I don’t even agree with that.

    I think we learn that pragmatically, because it turns out to work much of the time.

  3. Neil Rickert: I don’t even agree with that.

    I think we learn that pragmatically, because it turns out to work much of the time.

    I meant hardwired into the neural architecture. IF a neuron receives sufficient inputs over a short time period such that the depolarisation reaches the threshold for an action potential THEN it fires, and thus sends a depolarising signal to the next downstream neuron.

    At a higher level, if a population of neurons that potentially cause a motor program to be enacted is in a mutually inhibitory relationship with competing population potentially causing an alternative motor program to be enacted, then nothing will happen until one receives an additional excitatory input, causing the other to release its inhibitory hold, and resulting in an output decision to the motor program which is then executed.
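
    A toy sketch of both of those ideas, with entirely made-up numbers (nothing physiological is intended): a unit “fires” only IF its summed excitatory-minus-inhibitory input reaches a threshold, and two mutually inhibitory populations race until one reaches threshold and the other is suppressed.

    ```python
    # Toy sketch only: thresholds, gains and inputs are arbitrary, not physiological.
    import random

    def fires(excitation, inhibition, threshold=1.0):
        """IF the summed input reaches threshold THEN the unit fires."""
        return (excitation - inhibition) >= threshold

    # Two mutually inhibitory populations racing to trigger competing motor programs.
    a = b = 0.0
    while not fires(a, b) and not fires(b, a):
        a += random.uniform(0.0, 0.1)   # noisy excitatory input to population A
        b += random.uniform(0.0, 0.1)   # noisy excitatory input to population B
        a, b = max(a - 0.05 * b, 0.0), max(b - 0.05 * a, 0.0)  # each inhibits the other

    winner = "A" if fires(a, b) else "B"
    print(f"Population {winner} reached threshold first; its motor program is released.")
    ```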

  4. I meant hardwired into the neural architecture. IF a neuron receives sufficient inputs over a short time period such that the depolarisation reaches the threshold for an action potential THEN it fires, and thus sends a depolarising signal to the next downstream neuron.

    That seems a stretch. And, even if the neurons are doing “if .. then”, it does not necessarily follow that our thinking would be doing it.

    As I see it, the neurons are categorizing. I do take it that categorizing, as a neural behavior, is built-in, though the categories themselves are not.

    Here, categorizing amounts to dividing. Divide the world into two parts. Then divide each of those, again into two parts, etc. This hierarchically structures the world, essentially as a binary tree. Then, pragmatically, we learn that “if .. then” (or logic) is a natural and effective way of searching a binary tree structure.
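
    A minimal sketch of that “divide, then divide again” picture, with an invented toy hierarchy: each node splits the world into two parts, and an “if .. then .. else” chain is just the natural way to walk down the resulting binary tree.

    ```python
    # Invented toy category tree: each node is (question, yes-branch, no-branch);
    # leaves are category labels. Searching it is a chain of if .. then .. else.
    tree = ("is it alive?",
            ("does it move?", "animal", "plant"),
            ("is it man-made?", "artefact", "mineral"))

    def categorise(node, answer):
        """answer(question) -> True/False; returns the leaf category reached."""
        if isinstance(node, str):        # reached a leaf category
            return node
        question, yes_branch, no_branch = node
        if answer(question):             # the "if .. then .. else" step
            return categorise(yes_branch, answer)
        else:
            return categorise(no_branch, answer)

    # Example: something alive that does not move comes out as "plant".
    print(categorise(tree, lambda q: q == "is it alive?"))
    ```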

  5. Lizzy, neurons fire anyway, just at a different rate. Inputs affect the probability of a neuron firing in a given time frame.

  6. There’s a slightly different way of approaching the same basic problem here.

    Consider modus ponens and modus tollens. Both of them say, in effect, the same thing: that one should not accept all of p, ~q, and p –> q. But neither says which of those three one ought to reject — only that one should not accept all of them. (This is why there’s a deep truth to the quip, “one person’s modus ponens is another person’s modus tollens!”) To decide which propositions one should accept or reject, one has to go beyond what formal reasoning can do for us.
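
    A quick brute-force sketch of that point: no assignment of truth values makes p, ~q and p –> q all true together, but dropping any one of the three leaves a consistent pair, so formal logic alone cannot say which to give up.

    ```python
    from itertools import product

    implies = lambda p, q: (not p) or q   # material conditional p -> q

    # The three commitments one cannot jointly accept.
    claims = {
        "p":       lambda p, q: p,
        "~q":      lambda p, q: not q,
        "p -> q":  lambda p, q: implies(p, q),
    }

    assignments = list(product([True, False], repeat=2))
    jointly_ok = [(p, q) for p, q in assignments
                  if all(c(p, q) for c in claims.values())]
    print("Assignments satisfying all three:", jointly_ok)   # [] -- none exist

    # Any two of the three are satisfiable, so the choice of which to reject
    # (modus ponens vs modus tollens) is not settled by the logic itself.
    for dropped in claims:
        kept = [c for name, c in claims.items() if name != dropped]
        ok = [(p, q) for p, q in assignments if all(c(p, q) for c in kept)]
        print(f"Drop '{dropped}':", ok)
    ```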

    Also, there’s a subtle but, to my mind, fascinating problem when it comes to characterizing the “ought” or “should” here. For one might be tempted to interpret it as a law or rule. But rules require application, and rules can be applied correctly or incorrectly. So the application of a rule is itself subject to normative assessment. But how? Suppose that normative assessment were itself determined by a rule. Then we have a meta-rule, a rule for determining when a rule has been correctly applied. But how are we to know that the meta-rule has been applied correctly? Do we need a meta-meta-rule? Is there an infinite regress of rules?

    That basic line of argument comes from Wittgenstein’s Philosophical Investigations, and it’s been subjected to a great deal of interpretation and criticism since he came up with it. The response I favor, first developed by Wilfrid Sellars and then substantially elaborated upon by Robert Brandom (Making It Explicit), is that norms are not rules. So although there are norms of rational discourse — how we ought to hold ourselves and each other accountable for our assertions — these norms cannot, on pain of the infinite regress, be rules or laws.

    Connecting the normative story — the story of persons, reasons, norms, commitments, and entitlements — with the neurobiological story — the story of the flow of inhibitions and excitations across neuronal interconnections — is, I think, the right way of understanding what is philosophically deep and interesting about the so-called “mind-body problem”.

  7. “Laws” is an unfortunate, inapt term, here. The Law of Identity is a transcendental, a predicate for the semantics needed to communicate and reason. Without it, we cannot separate something intended or conceived from something NOT intended or conceived.

    That’s not controversial, but what I think is commonly misunderstood is that as a transcendental, the LoI is not normative on extra-mental reality, but rather a prerequisite for thinking about it and talking about it. “Law of Identity” suggests a connection to extra-mental reality that it does not have, an empirical grounding that, say “Newton’s First Law” has.

    The LoI is not like that.

    As a transcendental, there is an anthropic element to it; for us to be aware of anything at all, we must distinguish, and thus divide our conceptual space into “this-but-not-that” compartments. Which means that, transcendentally, the LoI must obtain if we are even in a position to consider it. We could not consider it if reality were not amenable to distinctions.

    The LNC is also a transcendental for semantics, and thus communication and reasoning. LNC serves as a better example of the hazards of using the term “law”, though. LNC may be (and is) necessary for us to reason by exclusion (“this” entails “not that” for any given context), but LNC is not magic; it has no magical superpowers to metaphysically structure our reality so that any and all aspects of our extra-mental world conform to it.

    We adopt the LNC as axiomatic for many forms of reasoning. Axioms are adopted because they are necessary, though, and this does not mean they are actual, or more precisely, that all of the world around us can be comprehended in ways that are non-problematic with respect to the LNC.

    For the most part, the LNC is so effective, we forget it’s underwriting nearly all of our semantics and reasoning. But some aspects of our observed reality don’t jibe neatly with the LNC, or so it appears currently. The LNC is not a “god” or a cosmic truth; it’s a tool for reasoning, and while it acquits itself nicely nearly everywhere we use it, if reality is such that, say, quantum superposition confounds it, reality is not somehow “cosmically obligated” to conform to our use of the LNC tool.

    This is a pervasive theistic/intuitionist conceit.

    If we are talking about the LNC (or the LoI or the LEM) as “true” or “false”, I think we are off the rails. “True” and “false” are not coherent attributes of the LNC, any more than an axiom is “true truth”. An axiom is assumed to be true, because it is necessary for some purpose. LNC and LoI are *axioms* of reasoning, the building blocks of (some forms of) reasoning, but are not and cannot be “true” or “false” in the “corresponds to extra-mental state of affairs” sense. They can only be “useful” for reasoning or not, just as a (contemplated) axiom is either necessary for some use or not.

  8. All this doesn’t mean that the so-called Laws of Thought are false, but that they aren’t Laws of Thought, because, I submit, thinking is fundamentally probabilistic, and probably (heh), specifically, Bayesian.

    Bayesian philosophy, Bayesian epistemology. These, and other variants, seem to be quite the fad these days.

    It is nonsense. It is absurd. Thought isn’t anything like that.

    Among other problems with the idea, it turns out that people are terrible at probabilistic reasoning. Check out some of the research of Kahneman, Tversky and others.

  9. One of the reasons I deny that there are laws of thought, is that thought is used far more widely than in argumentation. Consider a fiction author thinking about the plot for the next novel, or a portrait photographer, thinking about how best to pose his subject.

    As I see it, thought is mostly simulate and evaluate. And it is that ability to evaluate that is key. Roughly speaking, we are able to perceive our own thoughts and evaluate them using the methods of evaluation that we have developed and refined in our real world experience.

  10. The LoI is not like that.

    Throughout your comment, I couldn’t help reading “LoI” as “LoL”.

    Teh lol that can be laffed is not teh eternal LoL. Embracing teh LoL, you become embraced. Teh furst rule of teh LoL is “You do not talk about teh LoL.” kthxbai

  11. petrushka:
    Lizzy, neurons fire anyway, just at a different rate. Inputs affect the probability of a neuron firing in a given time frame.

    Yes, that was my point – brains are logical but it’s fuzzy logic not classical logic.

  12. Neil Rickert: Bayesian philosophy, Bayesian epistemology. These, and other variants, seem to be quite the fad these days.

    It is nonsense. It is absurd. Thought isn’t anything like that.

    Among other problems with the idea, it turns out that people are terrible at probabilistic reasoning. Check out some of the research of Kahneman, Tversky and others.

    I agree entirely when we are talking about the meta level of actually explicitly estimating probabilities.

    But estimating probabilities implicitly is fundamental to the way we navigate the world and learn – we make probabilistic forward models of what will happen at some time in the very near future and update those models based on the degree to which they were in error.
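
    A bare-bones sketch of that predict-compare-update loop, with an invented quantity and an assumed learning rate (it is not meant as a model of any particular neural circuit): the forward model’s estimate is nudged in proportion to its prediction error.

    ```python
    # Invented example: track a quantity by updating a forward model in proportion
    # to its prediction error (a simple delta-rule-style update).
    import random

    estimate = 0.0        # the forward model's current prediction
    learning_rate = 0.3   # assumed: how strongly errors update the model
    true_value = 5.0      # the quantity being tracked (unknown to the model)

    for step in range(20):
        observation = true_value + random.gauss(0, 0.5)   # noisy sensory evidence
        prediction_error = observation - estimate
        estimate += learning_rate * prediction_error       # update toward what actually happened
        print(f"step {step:2d}  prediction = {estimate:5.2f}  error = {prediction_error:+.2f}")
    ```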

  13. Neil Rickert:
    One of the reasons I deny that there are laws of thought, is that thought is used far more widely than in argumentation. Consider a fiction author thinking about the plot for the next novel, or a portrait photographer, thinking about how best to pose his subject.

    As I see it, thought is mostly simulate and evaluate. And it is that ability to evaluate that is key. Roughly speaking, we are able to perceive our own thoughts and evaluate them using the methods of evaluation that we have developed and refined in our real world experience.

    Yes I would agree.

    However, I would also say that one of our most important evaluation tools is our ability to infer causality – to figure out that if A, then B.

    And I see the roots of classical logic as lying in practical prediction, or “forward modelling”, abstracted, reified and binarised.

  14. But estimating probabilities implicitly is fundamental to the way we navigate the world and learn – we make probabilistic forward models of what will happen at some time in the very near future and update those models based on the degree to which they were in error.

    I strongly doubt that.

  15. Classical logic is a cultural invention, too recent to have evolved biologically. The ability to engage in reasoning, like the ability to write music or poetry or do math in one’s head, seems to be sparsely distributed as well as rarely supported by formal education.

    In this regard it’s a bit like scientific reasoning, which I regard as an extension of classical reasoning.

    Scientific reasoning extends the ability to find causes in much the same way that a telescope extends our vision.

  16. Firstly, my sense as a mathematician is that Bayesian methods are too weak to account for most knowledge. They converge only slowly, and they lead to wishy washy conclusions. You could never get something like Newton’s laws with Bayesian inference.

    Secondly, the idea that the neurons are actually using mathematical equations, such as Bayes’ rule, seems exceedingly implausible.

  17. It’s important to keep a clear eye on the difference between how we actually think and how we ought to think, which is a difference in descriptive and prescriptive vocabularies, and it’s also important to keep a clear eye on the difference between how we think and how neurons interact, which is a difference in levels of description. Conflating either or both of those distinctions makes a hopeless muddle, I think.

  18. eigenstate,

    Yes, I share this view of the “laws” as pragmatic transcendentals. They are linguistic (better, metalinguistic) explications of the norms of rational discourse — the “laws” just allow us to say what one must be doing in order to count as a rational agent (by our lights, of course — who else’s?) And the norms cannot themselves be laws or rules, lest we invoke the infinite regress problem. So it’s not that the norms of rational discourse are “grounded” in laws that emanate from some non-human source, but rather that the laws themselves are grounded in norms of rational discourse that are also, and at the same time, human, all-too-human.

    KN

  19. Neil Rickert:
    Firstly, my sense as a mathematician is that Bayesian methods are too weak to account for most knowledge. They converge only slowly, and they lead to wishy washy conclusions. You could never get something like Newton’s laws with Bayesian inference.

    Secondly, the idea that the neurons are actually using mathematical equations, such as Bayes’ rule, seems exceedingly implausible.

    Ah, OK, I am probably playing fast and loose with the math.

    What I am getting at is that there is lots of evidence (I would say) to support the hypothesis that perception involves forward-model making – that neural populations start to behave as they would do were a given motor program to be initiated, in advance of that initiation (which may, in the event, not be executed). This is particularly apparent in the visual system, and there is evidence, for instance, that neurons whose receptive fields are normally tuned to a specific location in retinotopic space shift their receptive field just in advance of a saccadic eye movement to the location in retinotopic space that will be brought into their usual receptive fields by the planned saccade, even when the planned saccade does not occur. Which is thought to be why we do not see the world lurch when we move our eyes (which we do several times per second).

    But what is also interesting is that if there is ambiguity about the destination of the saccade, for instance two competing visual stimuli that are moderately close together, the system seems to adopt an error-minimising approach, landing half-way between the two rather than plumping for one or the other (i.e. landing closest to the one for which its “priors”, as it were, are greatest). This minimises, overall, the expected amount of correction required, at the cost of a greater correction than if the system had simply “guessed” correctly. For competing visual stimuli with more orthogonal programs (e.g. a saccade right vs a saccade left), however, output is binary – saccades are more accurate unless completely wrong, but delayed.
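
    A crude numerical sketch of that error-minimising idea, with invented locations and priors: if either of two candidate targets might turn out to be the goal, the landing point that minimises the expected (squared) correction is the probability-weighted average of the two, i.e. halfway between them when the priors are equal.

    ```python
    import numpy as np

    targets = np.array([-5.0, 5.0])   # two competing stimulus locations (arbitrary units)
    priors = np.array([0.5, 0.5])     # assumed prior probability that each is the real goal

    candidates = np.linspace(-6, 6, 241)
    # Expected squared correction if the saccade lands at x and the goal turns out to be each target
    expected_cost = [(priors * (x - targets) ** 2).sum() for x in candidates]

    best = candidates[int(np.argmin(expected_cost))]
    print(f"Error-minimising landing point: {best:.2f}")   # ~0.0, halfway between the targets
    ```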

    With more complex and less automated motor decisions, the general idea seems to be that when faced with conflict between two potential responses, the forward modelling loops back on itself, feeding back the probability of a given outcome (in the form of firing rates) into the neural cascades that potentially output each of the two potential responses, each cascade inhibiting the other with a strength proportional to its own, until one gets ahead, suppressing the other which in turn then releases its inhibitory hold, and races to execution. A bit like arm-wrestling.

    So, essentially, it seemed to me that what is going on is the summing of probabilities computed from continuously generated forward models.

    Maybe that’s not Bayesian 🙂 But it is forward modelling.

  20. Lizzie: What I am getting at is that there is lots of evidence (I would say) to support the hypothesis that perception involves forward-model making – that neural populations start to behave as they would do were a given motor program to be initiated, in advance of that initiation (which may, in the event, not be executed). This is particularly apparent in the visual system, and there is evidence, for instance, that neurons whose receptive fields are normally tuned to a specific location in retinotopic space shift their receptive field just in advance of a saccadic eye movement to the location in retinotopic space that will be brought into their usual receptive fields by the planned saccade, even when the planned saccade does not occur. Which is thought to be why we do not see the world lurch when we move our eyes (which we do several times per second).

    Well, that highlights where we disagree. We have fundamentally different understandings of perception, what it is and how it works.

    In short, you are a representationalist, while I am a direct perceptionist.

    I don’t see how representationalism could possibly work. Ok, to be fair, representationalists don’t see how direct perception could possibly work.

    Your last sentence: “Which is thought to be why we do not see the world lurch when we move our eyes (which we do several times per second).”

    I see that as “explaining” something that does not require explanation. It only seems to require explanation, because of the assumptions made by representationalists.

    When you are at the supermarket, a bar code scanner is used to read what you have purchased. That scanner uses a laser beam, which moves in saccades. Okay, they don’t use the term “saccades”, but it is doing the equivalent. Does the bar code scanner have to use forward modeling so that it does not see your grocery items lurch? Well, no. The saccades of the bar code scanner are not a problem. They are intended, they are part of how it works. My suggestion is that the saccades of the eye are not a problem; they do not require forward modeling; they are part of how vision works. And if representationalism has difficulty dealing with saccades, that’s because it is on a wrong track.

  21. Bayesian seems to be the default term whenever fuzzy logic is implied.

    My own impression is that brains mediate action, and every feature of brains has evolved in support of action. The ability to postpone and ponder courses of action is rather recent in brains, but appears to be a feature of birds and mammals.

    I still cringe at the term “choice.” Actions are not binary. They flow and evolve in process.

  22. I think the question to ask when studying any brain or nervous system function is how it supports action. Fight, flight, food acquisition, mating.

    I have rather poor vision. I can no longer be corrected better than 20/40 in my good eye, and 20/100 in the other.

    But I can spot the movement of a small insect or lizard in my yard 50 feet away and correctly identify it. My eyes are not just cameras, even though they have some things in common with digital cameras. My eyes are in support of survival and have special tunings for things that are relevant to survival.

  23. Neil Rickert: Well, that highlights where we disagree. We have fundamentally different understandings of perception, what it is and how it works.

    In short, you are a representationalist, while I am a direct perceptionist.

    I never know which ist I am! OK 🙂

    I don’t see how representationalism could possibly work. Ok, to be fair, representationalists don’t see how direct perception could possibly work.

    Could you explain what representationalism is?

    Your last sentence: “Which is thought to be why we do not see the world lurch when we move our eyes (which we do several times per second).”

    I see that as “explaining” something that does not require explanation. It only seems to require explanation, because of the assumptions made by representationalists. When you are at the supermarket, a bar code scanner is used to read what you have purchased. That scanner uses a laser beam, which moves in saccades. Okay, they don’t use the term “saccades”, but it is doing the equivalent. Does the bar code scanner have to use forward modeling so that it does not see your grocery items lurch? Well, no. The saccades of the bar code scanner are not a problem. They are intended, they are part of how it works. My suggestion is that the saccades of the eye are not a problem; they do not require forward modeling; they are part of how vision works.

    They are certainly part of how vision works. But I don’t see the analogy with the bar code scanner. The bar code scanner doesn’t see the grocery at all, and it certainly doesn’t try to translate it from retinotopic coordinates to world coordinates, because it doesn’t have to.

    And if representationalism has difficulty dealing with saccades, that’s because it is on a wrong track.

    I’m not yet convinced that I’m a representationalist (it doesn’t seem likely). It seems more likely that I’m being unclear.

    *googles*

    No, I’m not 🙂

    So I’m being unclear to someone; whether to you or to myself remains to be seen.

  24. petrushka: My eyes are not just cameras, even though they have some things in common with digital cameras. My eyes are in support of survival and have special tunings for things that are relevant to survival.

    Eyes are cameras in the simplest sense – they have an aperture, and a lens, and the visual scene is physically projected on to the retina.

    But they are not cameras-with-film. They don’t take a picture that is then “re-presented” to the homunculus as evidence as to what is outside. I’m not sure why, but I seem to have conveyed the impression that that is what I thought.

    I don’t. Not nohow.

  25. petrushka:
    Bayesian seems to be the default term whenever fuzzy logic is implied.

    That is unfortunate. I will try to avoid the term unless I am being very specific. But I have certainly built specifically Bayesian models of perception-and-action that seem to work.

    My own impression is that brains mediate action, and every feature of brains has evolved in support of action. The ability to postpone and ponder courses of action is rather recent in brains, but appears to be a feature of birds and mammals.

    I absolutely agree. As I’ve said more than once, I think it is no coincidence that brains evolved in things that move, and didn’t in things with roots.

    I still cringe at the term “choice.” Actions are not binary. They flow and evolve in process.

    I don’t cringe, because it’s a word that can refer to more than binary choices – but it does tend to imply discrete choices. I agree that actions flow, but I don’t think that means that we have to invoke something very different to “choice” – if I reach for a chocolate in a box, my hand will “flow” over the box as it is buffeted by the competition between goals in my head as I forward model the various flavours and evaluate them against my modelled pleasure – just because it’s a continuously looping process doesn’t mean it isn’t “choosing” – and the outcome of many actions however flowing is often still discrete. I settle on a final chocolate. I dial a specific number. I turn right rather than left. I use a forehand rather than a backhand.

    And we can even investigate the process by which a planned action moves from being revocable to being irrevocable. For instance:

    Looking before you leap: a theory of motivated control of action

    ETA: also

    Recalibrating Time: When Did I Do that?

    (Not an attempt to argue-from-authority, just to supply background and evidence of at least minimal peer-review :))

  26. Lizzie: Could you explain what representationalism is?

    In brief, the representationalist holds that the eye is a kind of digital camera, producing a pixel map on the retina. Thus the retinal image is a representation of the environment, and perception works by analyzing that representation.

    My way of looking at it, is that the eye scans the field in saccades. As it does this, the light reaching a particular retinal cell scans across the landscape. When that crosses a sharp boundary, there is a sharp transition in received signal. These transitions can be detected far more reliably and precisely than just luminosity. The idea is that the perceptual system is actively seeking information by scanning with saccades, rather than passively receiving it. The information used for vision would then include the timing of when, during a saccade, that boundary was crossed.

    A boundary thus found gives a reliable recognition point, so that data from other parts of the visual field can now be identified with their distance (rotational distance within a saccade) from that boundary.
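
    A toy sketch of that “sharp transition during a scan” idea, on an invented one-dimensional luminance profile: the boundary shows up as a large step between successive samples, and the sample index at which it is crossed (the timing within the scan) is itself usable information.

    ```python
    # Invented 1-D luminance profile, standing in for the light reaching one
    # receptor as the eye sweeps across the scene during a saccade-like scan.
    luminance = [0.20, 0.21, 0.19, 0.22, 0.80, 0.81, 0.79, 0.82]

    threshold = 0.3   # assumed: how big a jump between samples counts as a sharp boundary
    boundaries = [i for i in range(1, len(luminance))
                  if abs(luminance[i] - luminance[i - 1]) > threshold]

    print("Boundary crossed at sample index:", boundaries)   # [4] -- the timing marks the edge
    ```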

    Representationalist: we are passively receiving data, so let’s analyze it.

    Direct perceptionist: there ain’t no passive data, so let’s go get data in as reliable a method as possible. And the method that we use to get the data becomes part of the information that perception can use.

  27. Neil Rickert: In brief, the representationalist holds that the eye is a kind of digital camera, producing a pixel map on the retina. Thus the retinal image is a representation of the environment, and perception works by analyzing that representation.

    Thanks. Well, that is not my view. Phew!

    My way of looking at it, is that the eye scans the field in saccades. As it does this, the light reaching a particular retinal cell scans across the landscape. When that crosses a sharp boundary, there is a sharp transition in received signal. These transitions can be detected far more reliably and precisely than just luminosity. The idea is that the perceptual system is actively seeking information by scanning with saccades, rather than passively receiving it. The information used for vision would then include the timing of when, during a saccade, that boundary was crossed.

    Ah. I think the difference between us is technical, not philosophical. There’s not much evidence that we see anything during a saccadic eye movement, unlike “smooth pursuit” eye movements. Saccades are extremely rapid, and the conventional view (possibly not entirely correct, but with a lot of evidence to support it) is that during a saccadic eye movement the visual system is highly suppressed (called saccadic masking, or saccadic suppression).

    A boundary thus found gives a reliable recognition point, so that data from other parts of the visual field can now be identified with their distance (rotational distance within a saccade) from that boundary.

    Well, I agree that boundaries are important, but I’m not aware of evidence that they are detected during saccades, although they are certainly detected via multiple fixations (not least because only the foveal region is tuned for high spatial frequencies, peripheral vision being heavily low-pass filtered).

    Representationalist: we are passively receiving data, so let’s analyze it.

    Direct perceptionist: there ain’t no passive data, so let’s go get data in as reliable a method as possible. And the method that we use to get the data becomes part of the information that perception can use.

    In that case, I can confidently assert that I am a Direct perceptionist 🙂

    And indeed my position is the current consensus position that vision is an active process, which is one of the reasons that I think perception and awareness are best seen in terms of preparation for action, even if the action is as small as a saccade.

  28. I think the most productive way to look at how brains work is to look at the simplest reflexive circuits and try to understand how layers of complexity evolved. Each layer above the simple reflex enables degrees of freedom.

    Free will is not something that magically appears at the human level. It appears in the first neural circuits that enable behavior more complex than immediate reflex.

    On the other topic:

    The problem I have with the words choice and decision is that they imply the possibilities are out there, like fruit to be picked or items to be purchased, or buttons to be pushed.

    I think novel behavior and invention is far more interesting and far more germane to the problem of free will.

    Invention is the activity that IDists deny can be done by evolution. Invention is the human activity that theists deny is possible without a disembodied mind. If invention can be done by machine, so to speak, by a completely deterministic construction, then that would undercut the claim that there is something special about free will.

    To me, that’s the near term goal of AI.

    I do not care if robots ever pass the Turing test or pass themselves off as human. I am interested in whether they can invent.

  29. petrushka: I do not care if robots ever pass the Turing test or pass themselves off as human. I am interested in whether they can invent.

    Oh, me too. But then genetic algorithms seem pretty inventive to me, already, ironically.

    So far from evolution being unable to invent, it seems the best way of doing it. And, to use another possibly over-used term, “Neural Darwinism” still seems a good way of describing how the brain works – with excitatory feedback representing reproduction and inhibitory feedback representing selective culling.

  30. I think we’ve been on the same page for a long time.

    I just think that invention is a much more obvious and testable way of discussing free will.

    If you “choose” between outer defined options, there is no way to tell what is going on inside the box. You can argue forever.

    If you can invent, I don’t think anyone can meaningfully say you are not “free.”

    So the ability to invent is my operational definition of free will. I think evolution demonstrates it.

  31. I think it’s likely indeed that these laws are JUST a special case in thought.
    They work only upon a common agreement.
    Logical thought only works when there are conclusions about truth.
    In Christianity or nature, conclusions are difficult to discover, so logic fails.
    I say it’s illogical to say evolutionary biology can claim biological evidence from geological presumptions. The biology ‘fact’ is only a fact if the geology is a fact.
    Without the geology fact there is NOT a biological fact.
    So I insist and complain evolution is not a biological theory but only a hypothesis to date.
    Fossils, genetics, morphology etc are not facts about biology at all.
    So a great failure in the logic.
    Evolutionists insist geology makes biological data points a fact of biological descent and process.
    Somebody is breaking some law here!

  32. petrushka:
    I think we’ve been on the same page for a long time.

    I just think that invention is a much more obvious and testable way of discussing free will.

    If you “choose” between outer defined options, there is no way to tell what is going on inside the box. You can argue forever.

    If you can invent, I don’t think anyone can meaningfully say you are not “free.”

    So the ability to invent is my operational definition of free will. I think evolution demonstrates it.

    Speaking from the peanut gallery, I wanted to point out that reading the above informed disagreements among honest and knowledgeable folks is far more entertaining and informative than the anti-science apologists that so often excrete here.

    Secondly, while I really like the idea that “inventing” is a necessary component of Free Will (and for those who hate that term, I sympathize), I would still say it’s not enough. Biological evolution does invent new life forms, and there are degrees of freedom in the process, but I would hardly include the term “will” for what it does. I also think Lizzie’s Forward Modeling is a necessary ingredient — you not only have to invent and choose, but you also have to have some awareness of what you have invented and make a decision on what to do with it — even if that decision is wrong.

    Free Will means being able to say “I changed my mind”.

  33. Forward modeling, or being “aware” of potential consequences seems like a reasonable way of demarcating the boundary between those with will and those without.

    But I think it’s a fuzzy line.

    I would judge the existence of awareness by the complexity of behavior and the complexity of learning.

  34. Neil,

    In brief, the representationalist holds that the eye is a kind of digital camera, producing a pixel map on the retina. Thus the retinal image is a representation of the environment, and perception works by analyzing that representation.

    That’s an unfair representation of representationalism.

    I am a representationalist, but I certainly do not hold that the retinal “pixel map” is the representation.

    The representation of the visual world is 3D, not 2D; it’s dynamic, not static; and it gets updated over time in response to the information flowing in from the eyes, rather than being merely a succession of passive analyses of static retinal images.

    Optical illusions are strong evidence in favor of representationalism over direct perception. A representationalist can understand why we can see motion in a static image, and why we see contours that aren’t really there.

    How would a direct perceptionist explain these? How can we directly perceive something that isn’t there in the first place?

  35. petrushka:

    But I think it’s a fuzzy line.

    Indeed. I still wonder sometimes whether we can even consider it a universal trait among humans.

  36. The representation of the visual world is 3D, not 2D; it’s dynamic, not static; and it gets updated over time in response to the information flowing in from the eyes, rather than being merely a succession of passive analyses of static retinal images.

    Yes, you are right. Many representationalists do hold that there is an internal 3D image somewhere. Thanks for filling in the details.

    Optical illusions are strong evidence in favor of representationalism over direct perception.

    Not really.

    The direct perceptionist takes perception to be the acquiring of information, not the production of sensations. If the information acquired is ambiguous or erroneous, optical illusions can result.

  37. Neil,

    The direct perceptionist takes perception to be the acquiring of information, not the production of sensations. If the information acquired is ambiguous or erroneous, optical illusions can result.

    But in the case of the two illusions I linked to, the information presented to the retina is neither ambiguous nor erroneous.

    Measure the brightness on either side of the illusory contour, and you’ll find that it is exactly the same. The contour is unambiguously absent from the stimulus. Print the colored pattern on a piece of paper, so that it can’t possibly move, and your brain will still perceive it as moving.

    The error is in the representation, not the stimulus.

    Perception is indirect.

  38. davehooke:
    Geology is a fact. Or rather, lots of facts.

    Whether conclusions in origin geology are right or wrong is unrelated to biological facts. All there is IS biological data points. Then connections that only work if geology is invoked.
    Logical flaw here I think.

  39. Robert Byers: Whether conclusions in origin geology are right or wrong is unrelated to biological facts. All there is IS biological data points. Then connections that only work if geology is invoked.
    Logical flaw here I think.

    Well, you are a Young Earth Creationist, which is perhaps the silliest stance on the age of the earth it is possible to take.

    So you cannot think any other way, which is quite sad. If you want to spend your life in denial, that’s your call.

  40. keiths:
    Neil,

    That’s an unfair representation of representationalism.

    I am a representationalist, but I certainly do not hold that the retinal “pixel map” is the representation.

    The representation of the visual world is 3D, not 2D; it’s dynamic, not static; and it gets updated over time in response to the information flowing in from the eyes, rather than being merely a succession of passive analyses of static retinal images.

    Optical illusions are strong evidence in favor of representationalism over direct perception. A representationalist can understand why we can see motion in a static image, and why we see contours that aren’t really there.

    How would a direct perceptionist explain these? How can we directly perceive something that isn’t there in the first place?

    Oh golly, these isms confuse me.

    I’d say we see contours that “aren’t there” because we make models based on the evidence we have. In fact I wouldn’t even say “they aren’t there” – I don’t think any contours “are there”; contours themselves are models.

    Is a model a “representation”? We can certainly represent a model to another person, and indeed to ourselves. Does that make me a representationalist? I don’t know.

    I don’t think there’s an inner screening room – a “Cartesian theatre” in which a homunculus sits and views the assembled representation/model. I think we act based “directly” on the model.

  41. Robert Byers: Whether conclusions in origin geology are right or wrong is unrelated to biological facts. All there is IS biological data points. Then connections that only work if geology is invoked.

    Logical flaw here I think.

    Robert, there are a gazillion data points: geological, biological, astronomical, cosmological, subatomicological, you name it.

    What gives us so much confidence about the age of the earth is that these data points collectively give us converging and consilient evidence. To insist that the earth is only a few thousand years old in the teeth of such evidence is to put faith over evidence. Which is your prerogative. But there is no “logical flaw”.

    Nor, to bring us back on track, is there a “logical flaw” in inferring, from the overwhelming evidence that minds and brains are intimately connected, that the way we think is a function of the way our brains work. The brain is not simply a “memory store” for the use of the mind. Indeed I’d question the concept of the brain being a “memory store” at all.

  42. No logical flaw. Just you insisting that geology must be unrelated to biology. You do not understand consilience. Consilience. Your word for today.

    You do not understand logic either.

    As it happens, there is plenty of evidence for evolution that does not directly rely on geology.

  43. Lizzie,

    I’d say we see contours that “aren’t there” because we make models based on the evidence we have.

    Yes, and in the models we make, we extend and connect contours that are not extended and connected in the actual retinal image. According to direct perceptionists, this shouldn’t happen.

    It’s fairly easy to see why our visual systems perform this “extend and connect” trick. It’s a valuable strategy in a world in which nearer objects often partially occlude more distant ones. Rather than assuming that disconnected contours belong to separate objects, our visual systems make a leap and connect them, in effect assuming that they belong to a single object. It doesn’t always work, but it works far more often than the opposite strategy.

    In fact I wouldn’t even say “they aren’t there” – I don’t think any contours “are there” contours themselves are models.

    I suppose it’s a semantic issue. I consider the brightness discontinuities in the corners of the Kanizsa triangle to be “contours”. Those contours are “really there”, because the brightness discontinuities are “really there”. The perceived contours at the midpoints of the sides, on the other hand, are illusory — constructed solely by the perceptual apparatus. There are no brightness discontinuities there.

    Is a model a “representation”?

    Yes, though in the context of the direct perception vs. representationalism debate, we are talking specifically about models created and used by the perceptual system itself, not higher-level cognitive models of the world.

    I don’t think there’s an inner screening room – a “cartesian theatre” in which a homunculus sits and views the assembled representation/model. I think we act based “directly” on the model.

    I agree. Direct perceptionists would argue that there is no model.

  44. davehooke:
    No logical flaw. Just you insisting that geology must be unrelated to biology. You do not understand consilience. Consilience. Your word for today.

    This is not about consilience. That’s different.
    This is about a logical flaw of believing one is presenting biological evidence for descent and process conclusions when it’s in fact just connecting biological data points, and this is ENTIRELY determined by geological presumptions.
    Yet insisting it’s biological evidence for its biological conclusions.
    I think I’m right here.
    Anyone good with logic in the building!?

    You do not understand logic either.

    As it happens, there is plenty of evidence for evolution that does not directly rely on geology.
