This is the first part of a series of posts meant to help me think through the relationship between ID and Turing tests. Please be patient: I will get to the controversial stuff soon enough, but I want to lay some groundwork first.
Below is a quick refresher video explaining the three forms of inference for those interested.
It’s a given that abductive inference is the most subjective of the three, and that is usually seen as a bad thing. I would like to argue that this very subjectivity makes shared abductive inference a great proxy Turing test.
In the standard Turing test, the examiner asks questions to see if he can distinguish the answers given by an artificial intelligence from those offered by a human. If he can’t, he assumes that the AI is conscious (i.e., has a mind).
What the examiner is really trying to determine is whether the AI thinks like a human rather than like a computer.
What does it mean to think like me other than to share the same abductive inferences that I do?
Deduction is certainly not a uniquely human activity. Since the conclusion flows inevitably from the premises, a simple algorithm could be written to reach a conclusion deductively; no conscious thought is necessary. By the same token, induction also moves from premise to conclusion, albeit in the other direction and with less certainty. Any computer could do that.
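To illustrate the point that deduction is mechanical, here is a minimal sketch of rule application (modus ponens) as a loop. The rule format and function names are my own illustrative choices, not any standard library:

```python
# Sketch: deduction as mechanical rule application.
# Rules have the form (premise, conclusion): whenever the premise
# is a known fact, the conclusion follows with certainty.

def deduce(facts, rules):
    """Repeatedly apply the rules until no new facts are produced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = {"Socrates is a man"}
rules = [("Socrates is a man", "Socrates is mortal")]
print(deduce(facts, rules))  # both facts, including the deduced one
```

Nothing here requires judgment: the loop terminates with every conclusion the premises entail, and any computer can run it.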
On the other hand, abduction is the form of inference that is most human, in that there is no logically compelling reason to choose one particular conclusion over another. Wildly different conclusions can be equally valid from a logical standpoint. We must subjectively decide which conclusion is the best one.
Strangely enough, more often than not we humans come up with the same conclusion when presented with the same information, at least for simple arguments.
For instance, we see that it’s raining and conclude that it’s cloudy, even though it sometimes rains when the sun is shining.
Or we might hear a rustling in the bushes and conclude that there is an animal there, even though it could be the wind.
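The rustling-in-the-bushes example shows where abduction differs from the loop above: both hypotheses are logically compatible with the observation, so a program can only pick one if someone hands it plausibility scores. Here is a sketch of that, with made-up names and weights of my own choosing:

```python
# Sketch: abduction as "inference to the best explanation".
# Both hypotheses explain the observation equally well logically;
# the plausibility weights are subjective inputs supplied by a human,
# not something the algorithm derives on its own.

def best_explanation(hypotheses):
    """Pick the hypothesis with the highest hand-assigned plausibility."""
    return max(hypotheses, key=lambda h: h["plausibility"])

hypotheses = [
    {"cause": "an animal", "plausibility": 0.7},  # subjective weight
    {"cause": "the wind",  "plausibility": 0.3},  # subjective weight
]
print(best_explanation(hypotheses)["cause"])  # → an animal
```

The mechanical part (take the max) is trivial; everything interesting lives in the weights. Two reasoners who share the same weights will share the same abductive conclusions, which is exactly the similarity the proxy test looks for.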
I think that if we encountered a nonhuman entity, such as an AI, that almost never came to the same conclusions that we do in situations like these, we would naturally conclude that it was not conscious.
By the same token, if we came across an entity that often came to similar conclusions when using abduction, we would conclude that there was a mind behind it all.
Of course, since that conclusion is itself based on abductive reasoning, we could never be certain that our inference was correct.
What do you think about all this?
In my next post I will share a tangible example to show how this might work in practice.
P.S. As always, I apologize for any poor spelling and grammar.