Messing with our new Computer Overlords

Some interesting entailments present themselves if we accept that minds do abductive inference differently than non-minds. For instance, it allows us to have some fun with neural networks.

There is an AI experiment from Google that attempts to guess the subject of your doodles. I’ve found it can be easily defeated by simply drawing a random line through the middle of the screen before you begin to draw your picture. Even when your drawing skills are good, the AI will usually get it wrong, simply because it will assume that the line is an integral part of your drawing and not just a red herring or noise.

Check it out for yourself: https://quickdraw.withgoogle.com/#
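
Why does such a cheap trick work? A toy sketch in Python may help (this is nothing like Google’s actual model; the shapes, class names, and 16x16 canvas are all made up for illustration). It implements a whole-canvas template matcher, a classifier that treats every bit of ink as part of one object, and shows the failure mode: add a stray line and the combined ink resembles the wrong class more than the right one.

    # Toy sketch of a whole-canvas classifier (illustrative only).
    import numpy as np

    SIZE = 16
    yy, xx = np.mgrid[0:SIZE, 0:SIZE]

    # Two made-up classes: a small ring and a thick vertical stroke.
    r = np.hypot(yy - 8, xx - 8)
    circle = ((r > 1.5) & (r < 3.5)).astype(float)
    line = np.zeros((SIZE, SIZE))
    line[:, 7:9] = 1.0
    templates = {"circle": circle, "line": line}

    def classify(img):
        # Cosine similarity of ALL the ink against each class template.
        x = img.ravel()
        return max(templates, key=lambda c: x @ templates[c].ravel()
                   / (np.linalg.norm(x) * np.linalg.norm(templates[c])))

    doodle = circle.copy()
    print(classify(doodle))   # "circle" -- the clean doodle is recognized

    doodle[:, 7:9] = 1.0      # the "random line through the middle"
    print(classify(doodle))   # "line" -- the stray ink now dominates

A model with no concept of “that stroke is noise” has to fold every mark into its guess. The real network is vastly more sophisticated than this sketch, but it appears to share that assumption.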

Another interesting experiment was conducted using evolutionary algorithms that interact with deep neural networks designed to classify images.

http://www.evolvingai.org/fooling
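
The gist of that result is easy to reproduce in miniature. Here is a hedged sketch (a toy logistic-regression classifier and a bare-bones (1+1) evolutionary loop, not the deep nets or the fancier algorithms the researchers used): evolution finds an input the model is nearly certain about even though it lies nowhere near the training data.

    # Toy sketch of the "fooling" idea: evolve an input that a trained
    # classifier assigns high confidence, far from any real example.
    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up training data: two Gaussian blobs in the plane.
    X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)

    # Fit logistic regression by plain gradient descent.
    w, b = np.zeros(2), 0.0
    for _ in range(2000):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= 0.1 * (X.T @ (p - y)) / len(y)
        b -= 0.1 * np.mean(p - y)

    def confidence(x):
        # The model's probability that x belongs to class 1.
        return 1 / (1 + np.exp(-(x @ w + b)))

    # (1+1) evolution: keep any mutation that raises the confidence.
    champ = rng.normal(0, 1, 2)
    for _ in range(500):
        child = champ + rng.normal(0, 0.5, 2)
        if confidence(child) > confidence(champ):
            champ = child

    # The winner sits far from BOTH blobs, yet the model is ~100% sure.
    print(champ, confidence(champ))

The classifier is most confident deep inside a half-space, exactly where it has never seen data, and the evolutionary loop simply walks out there. The deep-net version of the same story yields images that look like static to us but that state-of-the-art networks label as familiar objects with over 99% confidence.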

Originally, the researchers wanted to use the evolutionary algorithms to construct images that humans would recognize. In this way, both the DNN and the evolutionary algorithms could be said to fail the “abduction Turing test” because they obviously come to very different conclusions than we do about the concepts that were classified.

What’s interesting is that we can make that evaluation by observation without directly asking the AI questions.

What do you think?

peace


89 thoughts on “Messing with our new Computer Overlords”

  1. fmm,

    I’ve found it can be easily defeated by simply drawing a random line through the middle of the screen before you begin to draw your picture.

    Slow hand clap.

    What do you think?

    Humans can be fooled and computers can be fooled, but by different things. It demonstrates nothing.

    If we accept that minds do abductive inference differently than non-minds.

    Sigh. I program a computer to exactly emulate your brain and the way it works. It’s so successful it is able to predict what you are going to say just before you say it. Now this non-mind is making exactly the same inferences as you; therefore it is now a mind, right?

  2. fmm,

    the AI will usually get it wrong, simply because it will assume that

    It will assume, will it? Seems to me assuming is something that minds do, not computer programs.

    Another interesting experiment was conducted using evolutionary algorithms that interact with deep neural networks designed to classify images.

    http://www.evolvingai.org/fooling

    Fooling sounds like something you do to a mind, not a program.

    The fact is, FMM, there is no argument you can make that computers cannot have minds or be creative that cannot also be applied to yourself.

    Go on, try it.

  3. OMagain: Humans can be fooled and computers can be fooled, but by different things. It demonstrates nothing.

    At the very least it demonstrates that minds are not computers and that algorithmic processes are not like mental processes.

    That might not be a lot, but it’s important.

    OMagain: Seems to me assuming is something that minds do, not computer programs.

    You are right; it does not really assume anything. It’s simply programmed to react to certain inputs in particular ways.

    OMagain: Fooling sounds like something you do to a mind, not a program.

    Again, you are right: you are not actually fooling the AI; it is failing a Turing test.

    The entity that you “fool” and that “assumes” is not actually the AI but the programmer who built it.

    OMagain: The fact is, FMM, there is no argument you can make that computers cannot have minds or be creative that cannot also be applied to yourself.

    Go on, try it.

    Are you paying attention?

    I’m arguing right now that computers aren’t minds because they do abduction differently than minds do.

    You can’t apply that argument to me because I do abduction like a mind and not like a computer.

    That is my position, with supporting evidence.

    The onus is now on the folks who disagree to pony up with counterarguments or refutations.

    Go on, try it.

    peace

  4. If an AI has no capacity for abduction, then I can see the argument for it not being analogous to the human mind, but I don’t see that this means it’s disqualified from having a mind per se.

    There may be many entities out there with vastly different minds: some capable of abduction, some incapable, others with modes of reasoning completely foreign to us.

    God may even be incapable of abduction, in the sense that for an omnipotent being the concept of inference appears meaningless.

  5. FMM: What do you think?

    I think you don’t know what the word “entailments” means. I’m guessing that you actually mean “implications”, but based on your analysis and conclusions even that is up for debate.

    Seems to me you’ve demonstrated that putting garbage data into an analysis algorithm results in expected garbage results. I don’t see anything particularly surprising there.

  6. Eventually the computers are going to tell us we’re not *really* conscious because we’re made out of meat, and how could meat be conscious?

  7. If an AI program makes mistakes and has feedback, it will learn, provided it is a learning machine.

    Google Translate can now interpolate languages. If it is trained to translate from English to French and from French to Chinese, it can translate from English to Chinese. Since this is a commercial money-making system, this ability is non-trivial. It vastly reduces the training effort required to translate from any language to any other.

    AI is in its infancy, but it is already making billions of dollars for its investors.

    Expect more in the future.

  8. fifthmonarchyman: At the very least it demonstrates that minds are not computers and that algorithmic processes are not like mental processes.

    Actually, no, it does not demonstrate that. It merely demonstrates that this particular computer algorithm doesn’t behave the same way as people.

    Don’t you know that the perfect algorithm to emulate a mind is just around the corner? It has always been “just around the corner”, and it probably always will be. But the fact that it has always eluded us does not prove that it doesn’t exist. You would need a different way of proving that.

  9. Discussion of the possible future of A.I. often founders by failing to distinguish between “could we build a synthetic cognitive system that has human-like abilities to perceive, classify, judge, desire, and act?” and “is cognition reducible to an algorithmic process?”

    I think the answer to the second question is “definitely not!”, but the answer to the first is, “probably”.

  10. fmm,

    You can’t apply that argument to me because I do abduction like a mind and not like a computer.

    How do minds do it then?

  11. OMagain: How do minds do it then?

    Did you read the first thread in the series? We spent some time discussing it there.

    peace

  12. Kantian Naturalist: “is cognition reducible to an algorithmic process?”

    I think it depends on whether the algorithm learns from feedback.

    AI improves as researchers quit worrying about what is going on in the black box and begin to build black boxes that learn. I’m quite willing to go out on a limb and predict that cognition will be instantiated in “silicon” but never understood.

  13. Neil Rickert: Actually, no, it does not demonstrate that. It merely demonstrates that this particular computer algorithm doesn’t behave the same way as people.

    That is a good point.

    I should have limited my statement to existing AI as expressed in deep neural nets. It is the cutting edge of the field, but there is always the possibility that another approach might yield better results in this regard.

    Neil Rickert: But the fact that it has always eluded us does not prove that it doesn’t exist. You would need a different way of proving that.

    At the risk of derailment and repetition on my part:

    https://arxiv.org/abs/1405.0126

    peace

  14. Kantian Naturalist: “could we build a synthetic cognitive system that has human-like abilities to perceive, classify, judge, desire, and act?”

    It depends on how we quantify “human-like”.

    If we mean indistinguishable even in principle, I would vehemently disagree that such a thing is possible.

    And I suggest that developing a more robust objective Turing test is the way to determine which of us is right.

    peace

  15. petrushka: I’m quite willing to go out on a limb and predict that cognition will be instantiated in “silicon” but never understood.

    How exactly will you know that cognition has occurred?

    peace

  16. 1. This “deep neural net” thing is so very silly. It boils down to self-promotion. Define your niche narrowly, and then tell a story of yourself as a pioneer, if not a hero. I developed what is now called a deep neural net architecture in my dissertation research (1988-1990), and applied it to speech recognition. You won’t see me editing Wikipedia to indicate that I got there before just about everyone else — because “there” is nowhere remarkably far from neural nets in general.

    2. The kind of neural nets that everyone is talking about do not perform abductive inference. There’s no identifiable reasoning process of any sort. It’s number crunching, not symbol processing (which is not to say that we can never associate symbolic concepts with the processing of neural nets).

    3. The Google doodle classifier is not learning while interacting with you. It’s doing what it’s already been trained to do. (Yes, Google is collecting data to use in further training — offline, not online.) So the observation that the system doesn’t adapt dynamically to something you have learned to do is utterly worthless.

    4. It may be hard to tell, with all the hoopla, but nobody is claiming that deep neural nets work the way that humans do. They perform highly constrained, highly structured tasks. The really impressive work is in large-scale systems like self-driving cars, coordinating a number of AI techniques, including neural nets, that have emerged over the years. IBM’s Watson integrated 25 or so AI techniques, as I recall.

  17. Tom English: The really impressive work is in large-scale systems like self-driving cars, coordinating a number of AI techniques, including neural nets, that have emerged over the years. IBM’s Watson integrated 25 or so AI techniques, as I recall.

    So adding a bunch of techniques that don’t do abductive inference together will somehow yield a system that does do abductive inference.

    You know this how?

    Is there a way to test it to make sure?

    peace

  18. Tom English: So the observation that the system doesn’t adapt dynamically to something you have learned to do is utterly worthless.

    Yet a six-year-old has no problem recognizing a doodle of a known object with a random line thrown in, even if she has never been trained to do so.

    peace

  19. fifthmonarchyman: Yet a six-year-old has no problem recognizing a doodle of a known object with a random line thrown in, even if she has never been trained to do so.

    What? Show your work!

  20. Pedant: What? Show your work!

    I invited you to do it yourself.

    Have a young skull full of mush sit in your lap while you draw a puppy with a line to the right of it, and see if she can recognize it.

    She already does this automatically every time she identifies a puppy in a picture book that has frames around the images.

    Without training, she already knows how to distinguish between the puppy and any lines that surround it.

    Google Quick Draw can’t do that.

    peace

  21. fifthmonarchyman: I invited you to do it yourself.

    Have a skull full of mush sit in your lap while you draw a puppy with a line to the right of it, and see if she can recognize it.

    She already does this automatically every time she identifies a puppy in a picture book that has frames around the images.

    FMM, you’re a gem of incoherence. Seems that we’re in the Twilight Zone.

  22. Pedant: FMM, you’re a gem of incoherence. Seems that we’re in the Twilight Zone.

    Why?

    Of course no one is arguing that Google could not develop a tool that could distinguish between doodles and lines that surround them.

    But if they did that, you could simply defeat it another way, perhaps by adding dots at random or leaving gaps at certain points.

    The point is, as Tom English has said, that the neural net is not doing abductive inference at all. It’s simply following its programming. It has no idea what a drawing of a puppy actually looks like because it does not think the way we do.

    IOW, it fails the abductive inference Turing test.

    peace

  23. I’m confident here of two things:

    1) Minds can have different constellations of thought processes, learning abilities, inferential capabilities, confirmation biases, blind spots, intuitions, instinctive reactions, etc. and still be usefully regarded as all being minds. But

    2) A mind can be defined in such a way that only one specific constellation of capabilities and propensities qualifies. Dismissing useful minds as not being minds at all according to some definition is shooting yourself in the foot.

    I have also read that expert systems, which nobody would regard as being minds of any sort, can nonetheless be of invaluable assistance in solving the sorts of problems for which they were constructed. I saw a case where someone with malaria went to a couple of dozen doctors, none of whom could identify the symptoms. But one doctor had an expert system, which diagnosed malaria immediately.

    This can lead to interesting speculations about future approaches to such interactions. When I was young, answering the questions Google now leads me to in seconds required long (and often less fruitful) hours in the public library. Now it’s as if, rather than asking a librarian for direction, I can ask the books themselves directly.

  24. Flint: Minds can have different constellations of thought processes, learning abilities, inferential capabilities, confirmation biases, blind spots, intuitions, instinctive reactions, etc. and still be usefully regarded as all being minds.

    How exactly would you determine whether or not an entity has/is a mind?

    Flint: A mind can be defined in such a way that only one specific constellation of capabilities and propensities qualifies. Dismissing useful minds as not being minds at all according to some definition is shooting yourself in the foot.

    I think this is a very important point. One that is often missed by those who would deny that a mind is behind the universe.

    peace

  25. fifthmonarchyman: Yet a six-year-old has no problem recognizing a doodle of a known object with a random line thrown in, even if she has never been trained to do so.

    Lots of software comes pre-installed. Eating. Breathing. Pooping.

  26. fifthmonarchyman: How exactly would you determine whether or not an entity has/is a mind?

    Why would I even want to? I find it preferable to deal with the reality rather than trying to apply a label and then deal with the label.

    I think this is a very important point. One that is often missed by those who would deny that a mind is behind the universe.

    I fail to see how this comment is even on topic — unless you are defining a mind so narrowly that neither cats nor gods qualify. But even I would draw the line at imaginary minds. Reality is quite entertaining and enlightening by itself, no need to manufacture labels that don’t apply to any part of it.

  27. fmm mistakes a first-generation Google experiment for the peak of what AI will be able to accomplish and gets it badly wrong.

    News at 11.

  28. Flint: Why would I even want to?

    So you don’t end up having a relationship with a robot vacuum or a thunderstorm, or so you don’t feel guilty when you decommission your laptop.

    Flint: But even I would draw the line at imaginary minds.

    So would I. The topic of the thread is determining which minds are real and which are imaginary. We need a test to do that, IMO.

    Out of one side of your mouth you say that you don’t want to exclude anything that could possibly be construed as a mind, while out of the other you flippantly reject something that the vast majority of humanity already accepts is a mind.

    I’m just curious how you deal with the blatantly obvious mental disconnect.

    peace

  29. OMagain: fmm mistakes a first-generation Google experiment for the peak of what AI will be able to accomplish and gets it badly wrong.

    I think you misunderstand. The news isn’t that Google Quick Draw fails; we all knew it would fail.

    The news is that we have a way of determining what failure looks like.
    IOW, a Turing test.

    peace

  30. Flint:
    . . .
    I also have read that expert systems, which nobody would regard as being a mind of any sort, can nonetheless be of invaluable assistance in solving the sorts of problems for which it was constructed. I saw a case where someone with malaria went to a couple of dozen doctors, none of whom could identify the symptoms. But one doctor had an expert system which diagnosed malaria immediately.
    . . . .

    Expert systems are one good example. AI techniques that implement abductive reasoning have been around for literally decades, as shown by this paper. Fifthmonarchyman’s information is more than a little dated.

    Backward chaining has also been around for some time. Probably the best-known example of abduction in an AI system is Lenat’s Cyc. Abduction in Cyc, An Overview discusses it in some detail.
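
    For anyone who has never seen it, backward chaining fits in a few lines. Here is a minimal sketch in Python (with made-up diagnostic rules, not Cyc’s actual engine): to establish a goal, check the known facts, then find a rule whose head matches the goal and recursively establish everything in its body.

        # Minimal backward chaining over made-up rules of the classic
        # expert-system flavor: a hypothesis is established by chaining
        # back from it to supporting evidence.
        FACTS = {"fever", "chills", "travel_to_endemic_area"}
        RULES = [  # (head, body): head holds if every goal in body holds
            ("malaria", ["fever", "chills", "travel_to_endemic_area"]),
            ("flu", ["fever", "cough"]),
        ]

        def prove(goal):
            if goal in FACTS:
                return True
            return any(head == goal and all(prove(g) for g in body)
                       for head, body in RULES)

        print(prove("malaria"))  # True: all three findings are on record
        print(prove("flu"))      # False: "cough" cannot be established

    This is the skeleton inside diagnostic systems like the one Flint mentioned; real engines add unification, certainty factors, and cycle checking on top.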

  31. Richardthughes: Lots of software comes pre-installed. Eating. Breathing. Pooping.

    Sure. You wouldn’t even be able to boot your computer without it.

    The idea that a computer can think is simply absurd.

    The idea that the brain is a computer just cries out for an answer to who designed the software that’s running on that computer.

    fifthmonarchyman: So you don’t end up having a relationship with a robot vacuum or a thunderstorm, or so you don’t feel guilty when you decommission your laptop.

    Do you really think that attaching a label to something alters your understanding of it? I treat vacuum cleaners as vacuum cleaners, thunderstorms as thunderstorms, and laptops as laptops. That’s what I meant by dealing with the reality rather than some label, which may be inappropriate.

    So would I. The topic of the thread is determining which minds are real and which are imaginary. We need a test to do that, IMO.

    Why? If you can’t tell the difference, no test will help you. Indeed, for you the test would provide only the answer you’d permitted before you started.

    Out of one side of your mouth you say that you don’t want to exclude anything that could possibly be construed as a mind, while out of the other you flippantly reject something that the vast majority of humanity already accepts is a mind.

    Once again you deliberately misunderstand. I wish to deal with everything AS IT IS, not with a label. YOU, on the other hand, worship a label that applies to nothing. Which is precisely the danger I’m talking about, when you substitute labels for reality.

    I’m just curious how you deal with the blatantly obvious mental disconnect.

    And now you know. Reality is my yardstick. The Will To Believe is your yardstick. You are flailing around in these threads trying to justify what you know is mindless irrationality. Calling reality “mental disconnect” is a futile defense mechanism. Rational people know better.

    peace

  33. Mung: Sure. You wouldn’t even be able to boot your computer without it.

    The idea that a computer can think is simply absurd.

    The idea that the brain is a computer just cries out for an answer to who designed the software that’s running on that computer.

    Whoa! The model we have of a computer – a sharp line between software and hardware – doesn’t apply to how the brain operates. The brain is more analogous to an old-fashioned switchboard: connections are physically hardwired. Learning is a matter of either wiring something up or rewiring (much more difficult).

    But computers do make decisions, and all (non-trivial) programs make many decisions. Your operating system probably has many millions of conditional jumps. Perhaps if software becomes sophisticated enough, the decision matrix will be able to emulate a biological brain quite closely, but the underlying mechanism will bear no resemblance.

  34. Patrick: Expert systems are one good example. AI techniques that implement abductive reasoning have been around for literally decades, as shown by this paper. Fifthmonarchyman’s information is more than a little dated.

    Backward chaining has also been around for some time. Probably the best-known example of abduction in an AI system is Lenat’s Cyc. Abduction in Cyc, An Overview discusses it in some detail.

    I learned about abduction in an AI course, more than three decades ago, and went on to teach it a number of times. I taught students how to implement backward chaining in LISP, starting in 1984. It’s hard to work up a head of steam to engage a sassy, Sarah-Palinesque twerp who misreads even the simplest of comments.

  35. Man, some of them are hard. Try drawing “camouflage” or a raccoon in 20 seconds. What’s most amazing to me is that it still manages to guess it correctly from what I draw. I’m surprised it couldn’t guess “lightning”; I tried to give it some hints when there were two seconds left by drawing two zigzag lines on either side.

  36. Flint: But computers do make decisions, and all (non-trivial) programs make many decisions.

    Is it the computer that is making the decision or is it the software that is making the decision? Perhaps both are deciding on the same thing at the same time, eh? I wonder who decides which one wins.

  37. Tom English: I learned about abduction in an AI course, more than three decades ago, and went on to teach it a number of times. I taught students how to implement backward chaining in LISP, starting in 1984.

    Lisp? Paisan!

    Being only 40-50 years out of date isn’t bad for creationists, actually.

  38. Mung: Is it the computer that is making the decision or is it the software that is making the decision? Perhaps both are deciding on the same thing at the same time, eh? I wonder who decides which one wins.

    Is it colder in the North or in the winter? When you ask a question, is it you or is it your mind forming the question? Maybe both? Maybe your question is based on a misunderstanding?

  39. Flint: Maybe both? Maybe your question is based on a misunderstanding?

    Maybe I have lots of computer hardware sitting around doing nothing but collecting dust. I wonder what they are thinking about. What decisions they are making as they sit there.

    You really think a telephone switchboard is a good metaphor for the brain?

  40. Tom English: I learned about abduction in an AI course, more than three decades ago, and went on to teach it a number of times.

    Would you say that abductive inference done by an AI would be indistinguishable from that done by a human? Has this been experimentally evaluated?

    Tom English: It’s hard to work up a head of steam to engage a sassy, Sarah-Palinesque twerp who misreads even the simplest of comments.

    I hope you don’t have to work up a head of steam just to answer a simple question or two.

    Peace

    PS I love you too.
