Shared Abductive Inference as a proxy Turing Test

This is the first part of a series of posts that are meant to help me think through the relationship between ID and Turing tests. Please be patient; I will get to the controversial stuff soon enough, but I want to lay some groundwork first.

Below is a quick refresher video explaining the three forms of inference for those interested.

It’s a given that abductive inference is the most subjective of the three, and that is usually seen as a bad thing. I would like to argue that this very subjectivity makes shared abductive inference a great proxy Turing test.

In the standard Turing test the examiner asks questions to see if he can distinguish the answers given by an Artificial Intelligence from those offered by a human. If he can’t do that, he assumes that the AI is conscious (i.e. has a mind).

What the examiner is really trying to get at is whether the AI thinks like a human rather than like a computer.

What does it mean to think like me other than to share the same abductive inferences that I do?

Deduction is certainly not a uniquely human activity. Since the conclusion flows inevitably from the premises, a simple algorithm could be written to come to a conclusion deductively; no conscious thought is necessary. By the same token, induction also moves from premise to conclusion, albeit in the other direction and with less certainty. Any computer could do that.

On the other hand, abduction is the form of inference that is most human, in that there is no logically compelling reason to choose one particular conclusion over another. Wildly different conclusions can be equally valid from a logical standpoint. We must subjectively decide which conclusion is the best one.

Strangely enough, more often than not we humans do come up with the same conclusion when presented with the same information, at least for simple arguments.

For instance, we see that it’s raining and conclude that it’s cloudy, even though it sometimes rains when the sun is shining.

Or we might hear a rustling in the bushes and conclude that there is an animal there, even though it could be the wind.

I think that if we encountered a nonhuman entity like an AI that almost never came to the same conclusions that we do in situations like this, we would naturally conclude that it was not conscious.

By the same token, if we came across an entity that often came to similar conclusions when using abduction, we would conclude there was a mind there behind it all.

Of course since that conclusion itself is based on abductive reasoning we could never be certain that our inference was correct.

What do you think about all this?

In my next post I will share a tangible example to show you how this might work in practice.

Peace

PS As always I do apologize for the poor spelling and grammar

221 thoughts on “Shared Abductive Inference as a proxy Turing Test”

  1. fifthmonarchyman: PS As always I do apologize for the poor spelling and grammar

    And for failure to cut the post before the video. Please cut it to make it shorter on the main page.

  2. Interesting thesis, fmm. Do you think that if the computer had all and only the same (prior) data and laws as you do, it would surely abduce different causes given the same phenomena? If so, what is the psychological reason for that, do you think?

  3. walto: Do you think that if the computer had all and only the same (prior) data and laws as you do, it would surely abduce different causes given the same phenomena?

“Surely” is an objective term. When we are speaking of abduction, a better term might be “usually” or perhaps “strikingly” or “notably”.

    The point is that I think that a computer would abduce different causes enough to make me doubt it was conscious.

    This would of course be a subjective determination on my part but I think that most folks would agree with me if they had an open mind.

    That is the point

    peace

  4. I think that if we encountered a nonhuman entity like an AI that almost never came to the same conclusions that we do in situations like this we would naturally conclude that it was not conscious.
    By the same token if we came across an entity that often came to the similar conclusions when using abduction we would conclude there was a mind there behind it all.

What if the entity we encountered presented a selection of probabilities? Probability of sun shining while it’s raining equals the number of historical rainstorms when it’s sunny divided by total rainstorms; probability of cloudy is 1 minus that value. Probability of animal in bush X, probability of wind Y (varying depending on current wind conditions), probability of something else Z, and so on?

    Would you evaluate the probability of a mind behind this approach to be inverse to the accuracy of the probabilities, or some such?
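Flint’s hypothetical entity can be sketched in a few lines of code. This is only an illustration of the idea; the helper names and all the historical counts are invented for the example.

```python
# A sketch of Flint's probability-reporting entity: instead of committing
# to a single abductive conclusion ("it's cloudy"), it reports a probability
# for each candidate cause, estimated from (invented) historical counts.

def rain_cause_probabilities(sunny_rainstorms, total_rainstorms):
    """P(sunny | raining) = sunny rainstorms / total rainstorms;
    P(cloudy | raining) is taken as 1 minus that value."""
    p_sunny = sunny_rainstorms / total_rainstorms
    return {"sunny": p_sunny, "cloudy": 1.0 - p_sunny}

def rustle_cause_probabilities(p_animal, p_wind):
    """Distribution over causes of a rustle in the bushes; whatever
    probability is left over is lumped into 'something else'."""
    return {"animal": p_animal, "wind": p_wind,
            "something else": 1.0 - p_animal - p_wind}

probs = rain_cause_probabilities(sunny_rainstorms=5, total_rainstorms=100)
# the entity answers "cloudy, with probability 0.95" rather than just "cloudy"
```

The question in the comment then becomes: does an entity that answers this way, rather than halting the calculation and committing to one conclusion, count as having a mind?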

  5. Flint: What if the entity we encountered presented a selection of probabilities.

Off the top of my head I would say that we were not dealing with a mind in that case, mostly because we don’t think that way.

    Why do you ask?

    peace

fifthmonarchyman: “Surely” is an objective term. When we are speaking of abduction, a better term might be “usually” or perhaps “strikingly” or “notably”.

    The point is that I think that a computer would abduce different causes enough to make me doubt it was conscious.

    This would of course be a subjective determination on my part but I think that most folks would agree with me if they had an open mind.

    That is the point

    peace

    Again, what is the reason for this difference in your opinion? Why couldn’t the computer be programmed to abduce just as we would given the same data?

  7. walto: Do you think that if the computer had all and only the same (prior) data and laws as you do, it would surely abduce different causes given the same phenomena?

    I would say “probably” rather than “surely”.

    Here’s the difference. We are not as limited by prior data. We are better than computers at inventing new ways of getting new data.

    If I were trying to evaluate via a Turing test, then my test would be to attempt to teach the candidate a new concept. That’s usually possible with humans, but not so easy with computers (unless you can rewrite the programming).

  8. walto: Why couldn’t the computer be programmed to abduce just as we would given the same data?

Because for one thing I don’t think all our thinking is programmed. IOW our decisions are not physically determined.

If a computer could be programmed to think just like me, then that particular hypothesis would be falsified.

    peace

  9. fifthmonarchyman: Off the top of my head I would say that we were not dealing with a mind in that case. Mostly because we don’t think that way

    Why do you ask?

    peace

    It occurred to me that minds tend to think differently. SOME people actually DO see things in terms of probabilities. Are you trying to say that the only possible mind is a human mind like yours, and that people sufficiently different from you, or perhaps not even of your species, do not have minds?

    I submit that in order to do what you say it does, your god must have a mind so different from human as not to count as a mind at all in your view.

  10. Flint: SOME people actually DO see things in terms of probabilities.

Do you have a reference for this? I’m not saying that probabilities don’t enter into our decisions, only that at some point the calculations halt and we decide.

    Flint: Are you trying to say that the only possible mind is a human mind like yours, and that people sufficiently different from you, or perhaps not even of your species, do not have minds?

    I’m saying that when we infer consciousness in an entity we generally mean that it thinks like us (to some extent).

This is not to say that you have to think like us to have a mind, only that there is no way to say something has a conscious mind if its thought process is completely alien.

    peace

  11. Neil Rickert: If I were trying to evaluate via a Turing test, then my test would be to attempt to teach the candidate a new concept.

    You could not do that if we were dealing with a signal from a distant planet or a being from the past like Australopithecus afarensis.

    How would you determine if one of these entities was conscious?

    peace

fifthmonarchyman: Because I don’t think all our thinking is programmed. IOW our decisions are not physically determined.

There is some question about whether we know why we make certain decisions; it seems that the act of briefly holding a cup of hot coffee can influence our decisions.
Williams LE, Bargh JA. Experiencing Physical Warmth Promotes Interpersonal Warmth. Science. 2008;322(5901):606–607. doi:10.1126/science.1162548.

  13. Neil Rickert: We are not as limited by prior data. We are better than computers at inventing new ways of getting new data.

I think that is an important part of it. Humans use things like hunches and intuition in our decision making.

    Things that come with minds IMO.

    peace

  14. newton: There is some question about whether we know why we make certain decisions

    I don’t think we will ever know all the whys involved.
    If we could then we could reverse engineer them and plug it all into a computer.

fifthmonarchyman: Do you have a reference for this? I’m not saying that probabilities don’t enter into our decisions, only that at some point the calculations halt and we decide.

    Sure. Statisticians think that way habitually. They are human.

    I’m saying that when we infer consciousness in an entity we generally mean that it thinks like us (to some extent).

    This seems rather narrow. My cats are most emphatically conscious, and they don’t think like people at all. They make decisions, though, often very good ones (though often not the ones we would make).

This is not to say that you have to think like us to have a mind, only that there is no way to say something has a conscious mind if its thought process is completely alien.

    But if it makes decisions that consistently achieve its purposes, even if it goes about them differently, the probability is that it has a conscious mind. Cats certainly do, and so do dogs. But VERY different.

  16. Flint: Sure. Statisticians think that way habitually. They are human.

    I don’t think that we are on the same page.

Sure, a statistician might want to know the probabilities as much as possible; so would everyone, whether they express it in those words or not.

In the end, however, a human (statistician or not) will choose based on his best estimate of the known and unknown consequences of the action, and maybe his gut.

On the other hand, a computer might be programmed to always “choose” the path with the highest probability of success according to the information it has.

    Do you see the difference?

    Flint: This seems rather narrow. My cats are most emphatically conscious, and they don’t think like people at all.

I’m all ears: on what basis do you conclude that your cat is conscious and your robot vacuum cleaner is not?

    peace
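The contrast fmm draws here can be made concrete. The following is a minimal sketch of the machine side of it, with options and numbers invented purely for illustration: a rule that deterministically picks whichever option has the highest estimated probability of success.

```python
# The decision rule fmm attributes to a computer: always "choose" the
# option with the highest estimated probability of success -- no gut
# feeling, no halting of the calculation, the same answer every time.
# The options and probabilities below are invented for illustration.

def machine_choice(options):
    """Deterministically pick the key with the largest probability."""
    return max(options, key=options.get)

options = {"take umbrella": 0.8, "leave umbrella": 0.2}
machine_choice(options)  # always "take umbrella", every single run
```

A human, on fmm’s account, might weigh those same numbers and still sometimes go with a hunch; this rule never will.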

  17. Flint: But if it makes decisions that consistently achieve its purposes, even if it goes about them differently, the probability is that it has a conscious mind. Cats certainly do, and so do dogs. But VERY different.

What about a robot vacuum cleaner or an amoeba?

    peace

  18. Neil Rickert: Here’s the difference. We are not as limited by prior data. We are better than computers at inventing new ways of getting new data.

    fifthmonarchyman: I think that is an important part of it. humans use things like hunches and intuition in our decision making.

    Could be. We are strange beasties.

Reality MAY exist in distributive form, in the shape not of an all but of a set of eaches, just as it seems to be…. There is this in favor of eaches, that they are at any rate real enough to have made themselves at least appear to everyone, whereas the absolute [wholeness, unity, the one] has as yet appeared immediately to only a few mystics, and indeed to them very ambiguously.

William James, A Pluralistic Universe

fifthmonarchyman: Because for one thing I don’t think all our thinking is programmed. IOW our decisions are not physically determined.

    How do our decisions get manifested, then? There must be an interface to the physical at some point. Where’s the evidence for that?

  20. fifthmonarchyman: In the end however a human (Statistician or not) will choose based on his best estimate of the known and unknown consequences of the action and maybe his gut.

    On the other hand a computer might be programed to always “choose” the path with the highest probability of success according to the information it has.

My guess is computers far exceed humans in the estimation of known consequences; it would seem the difference is in the emotional component of decision making.

Looking at the history of machine intelligence, it seems that computers are gradually meeting every challenge that has been presented. They already exceed the abilities of humans in every game involving pure logic. They have recently become better poker players than humans.

    They are taking over complex control systems, such as power distribution on the grid. They are likely to replace prostitutes in the foreseeable future.

    If you extrapolate the trend, they will eventually meet any challenge you might present, provided it involves some visible or perceivable behavior.

  22. petrushka:
Looking at the history of machine intelligence, it seems that computers are gradually meeting every challenge that has been presented. They already exceed the abilities of humans in every game involving pure logic. They have recently become better poker players than humans.

    They are taking over complex control systems, such as power distribution on the grid. They are likely to replace prostitutes in the foreseeable future.

    If you extrapolate the trend, they will eventually meet any challenge you might present, provided it involves some visible or perceivable behavior.

That seems reasonable to me as well. It might be difficult, though, to program in some of the weird psychological peccadilloes that we don’t fully understand ourselves. FMM seems to be suggesting here that we’d recognize some of the choices a computer might make as being a bit foreign to our own not-strictly-logical manners of dealing.

    It’s a prediction based on psychology, which, while kind of interesting, has no obviously important entailments, IMO, so I’m curious to hear what fifth thinks would follow from it if it’s true.

  23. walto: FMM seems to be suggesting here that we’d recognize some of the choices a computer might make as being a bit foreign to our own not strictly logical manners of dealing.

    Bullshit. Human already make choices that I consider alien. I’ll ante child sex abuse.

    Behavior cannot be classified as human or non-human.

    I’ll wait for a counterexample.

  24. walto: I thought you believed they WERE! You’re confusing me! X>{

    We have been over this before. Just because I’m a compatibilist does not mean that I think our decisions are physically determined.

    I believe that there is more involved in mind than matter in motion.

Nevertheless, if you like, we can ignore all of that and just focus on cognition versus nescience.

    There is good reason to believe that cognition is not computable even if physical determinism is true. If that is the case then computers are not capable of the same sort of abductive inferences that we are.

    peace

  25. newton: My guess is computers far exceed humans in the estimates of known consequences, it would seem the difference is in the emotional component of decision making.

    I probably could buy that. The point is that there is a difference

    peace

fifthmonarchyman: We have been over this before. Just because I’m a compatibilist does not mean that I think our decisions are physically determined.

I apologize. I can’t remember the details of your position. So you’re a compatibilist (i.e., one who believes we’re both free and entirely determined, which is ok because free will is compatible with determinism), but you don’t believe the determination is the result (or entirely the result) of physical laws connecting causes and effects and pre-existing physical states?

    (I’m not planning to argue with you about this, just trying to get a sense of your position.)

  27. petrushka: Looking at the history of machine intelligence, it seems that computers are gradually meeting every challenge that has been presented.

Fine.

    If you like just think of shared abductive inference as another challenge to be met on the road to the Singularity.

    peace

walto: So you’re a compatibilist (i.e., one who believes we’re both free and entirely determined, which is ok because free will is compatible with determinism), but you don’t believe the determination is the result (or entirely the result) of physical laws connecting causes and effects and pre-existing physical states?

    Correct. I think that our decisions are the result of our nature and our environment.

I just don’t think our nature is a physical thing.

    peace

  29. petrushka: Behavior cannot be classified as human or non-human.

    We are not talking about behavior per se but belief, specifically abductive inference

    peace

  30. Thanks.

    Can you briefly say (I don’t want to hijack your thread), how our environment can have determinative effects on our “natures” if only one of the two is physical? Are there strict laws that may be known and consulted connecting the effects of environments on natures?

    Thanks.

walto: Can you briefly say (I don’t want to hijack your thread), how our environment can have determinative effects on our “natures” if only one of the two is physical?

    I’m not sure our environment can have an effect on our nature.

I would say both environment and nature affect our decisions, but neither has much effect on the other.

    peace

  32. Patrick: There must be an interface to the physical at some point. Where’s the evidence for that?

    yawn
    OK Fred,
    What sort of evidence would convince you that there was an immaterial component to your decisions?

    peace

  33. fifthmonarchyman: What sort of evidence would convince you that there was an immaterial component to your decisions?

The way the immaterial and material interact: how does the immaterial decision move the physical?

  34. newton: how does the immaterial decision move the physical.

    How does the physical decision move the physical?

    like that

    peace

walto: So you’re a compatibilist (i.e., one who believes we’re both free and entirely determined, which is ok because free will is compatible with determinism), but you don’t believe the determination is the result (or entirely the result) of physical laws connecting causes and effects and pre-existing physical states?

    I don’t think that’s quite right.

    I take compatibilism to be an account of what it means to have free will. And it seems to me that one can be a compatibilist in that sense, without assuming determinism.

Neil Rickert,

    I guess that’s true, although I think the position is generally unmotivated (also harder to understand) without determinism in the picture. I mean, without determinism, who cares if freedom is compatible with it?

  37. Pedant:
    Any relevance to ID seems remote at this point.

    Given that FMM has already rejected logic (and evidence) as a primary justification for decisions, I suspect he’s building up to a basis that is entirely emotional and objectively imaginary. Which would not surprise me at all. FMM seems to specialize in argument by groundless but intractable assertion.

  38. Flint: Given that FMM has already rejected logic (and evidence) as a primary justification for decisions

    When did I do that exactly?

    peace

  39. fifthmonarchyman: When did I do that exactly?

    peace

    In each of the posts you have made to this thread. You rejected statistical analysis in favor of emotional gut-driven decisions. You dismissed logical reasoning as not being what decisions are based on. And let’s be honest here, throughout your efforts you have rested your arguments on an appeal to a figment of your imagination.

Do you seriously not realize that people here understand what you’re saying? When you describe thought processes sufficiently different from your religion-addled woo as not being produced by a mind at all, you have made a fascinating claim — that “real” minds MUST share your nonsensical superstitions. So maybe you can understand that from a rational perspective, your own “mind” is seriously compromised at the very least.

  40. I am a statistician, and I occasionally do a demonstration showing that we all think in terms of probability. It starts with the question “Do you think in terms of probability?”, to which the answer is almost always “no”.
    Then I pull a coin out of my pocket and ask individuals to call “heads or tails”. Five or six consecutive Heads and some start getting suspicious. The proto-Bayesians assign a prior probability that the sneaky statistician might not be using a fair coin. 9 or 10 consecutive heads and even the most stubborn are convinced I’m playing a trick.

Then I point out this was a decision based on probability, and that people have differing thresholds of evidence. I point out my implied null hypothesis of a “fair coin”, and the probabilities of consecutive heads under that hypothesis.

    And they never trust me again. 🙂
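The quantitative core of that demonstration is simple: under the null hypothesis of a fair coin, the probability of n consecutive heads is (1/2)^n. A quick sketch of the numbers implied in the story:

```python
# Probability of n consecutive heads under the "fair coin" null hypothesis.
# When this number drops below an observer's personal suspicion threshold,
# they conclude the sneaky statistician is playing a trick.

def p_consecutive_heads(n, p_heads=0.5):
    """P(n heads in a row) for independent flips with P(heads) = p_heads."""
    return p_heads ** n

p_consecutive_heads(5)   # 0.03125       -- some start getting suspicious
p_consecutive_heads(10)  # 0.0009765625  -- even the most stubborn give in
```

The differing suspicion thresholds in the demonstration correspond to how small this probability has to get before each person rejects the fair-coin hypothesis.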

  41. Flint: In each of the posts you have made to this thread. You rejected statistical analysis in favor of emotional gut-driven decisions. You dismissed logical reasoning as not being what decisions are based on.

That is not what is going on at all. I’m saying that humans do Abductive Inference differently than computers. Different does not equal illogical.

    I have not said that the way humans decide is better only that it’s different.

    Flint: When you describe thought processes sufficiently different from your religion-addled woo as not being produced by a mind at all, you have made a fascinating claim — that “real” minds MUST share your nonsensical superstitions.

    That is again not what I’m saying.
    I’m saying that humans do Abductive Inference differently than machines.

If you doubt that, there are ways we can test it: it’s called a Turing test.

    peace

  42. Tomato Addict: Five or six consecutive Heads and some start getting suspicious. The proto-Bayesians assign a prior probability that the sneaky statistician might not be using a fair coin.

I think someone has been reading ahead 😉

    Would you say that proto-Bayesian is the default understanding of the typical human “bean”?

peace

  43. fifthmonarchyman:

    There must be an interface to the physical at some point. Where’s the evidence for that?

    What sort of evidence would convince you that there was an immaterial component to your decisions?

    You appear to be claiming that a) something “immaterial” (whatever that means) exists and b) that “immaterial” something has an effect on physical objects, like brains.

    Given those claims, there must be some mechanism by which the immaterial whatever interacts with physical reality. How does it do that? How do you know? If we investigated your proposed mechanism and found no evidence for it would you accept that your claims are unsupported?

    A related question is why did you come up with those claims in the first place? What observations resulted in those conclusions? After all, rational people don’t just make things up without good reasons.
