AI Skepticism

In another thread, Patrick asked:

If it’s on topic for this blog, I’d be interested in an OP from you discussing why you think strong AI is unlikely.

I’ve now written a post on that at my own blog.  Here I will summarize, and perhaps expand a little on, what I see as the main issues.

As you will see from the post at my blog, I don’t have a problem with the idea that we could create an artificial person.  I see that as possible, at least in principle, although it will likely turn out to be very difficult.  My skepticism about AI comes from seeing computation as too limited.

I see two problems for AI.  The first is a problem of directionality or motivation or purpose, while the second is a problem with data.

Directionality

Interestingly, Patrick’s message, where he asked for this thread, contained a picture of Spock from Star Trek.  As a Star Trek character, Spock was known to be very logical and not at all emotional.  That’s what I think you get with computation.  However, as I see it, something like emotions are actually needed.  They are what would give an artificial person some sense of direction.

To illustrate, consider the problem of learning.  One method that works quite well is what we call “trial and error”.  In machine learning systems, this is called “reinforcement learning”.  And it typically involves having some sort of reward system that can be used to decide whether a trial-and-error step is moving in the right direction.
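
To make the trial-and-error idea concrete, here is a minimal sketch of reinforcement learning on a two-armed bandit.  The payoff probabilities, exploration rate, and trial count are invented for illustration; this is a toy, not a description of any particular system.

```python
import random

# A minimal trial-and-error learner: an epsilon-greedy two-armed bandit.
# The scalar reward signal is the only thing telling the learner whether
# a trial moved in the right direction.  Payoff probabilities, epsilon,
# and the trial count are all invented for illustration.
PAYOFF = {0: 0.3, 1: 0.7}   # hidden chance that each arm pays off
EPSILON = 0.1               # fraction of trials spent exploring
estimates = {0: 0.0, 1: 0.0}
counts = {0: 0, 1: 0}

for trial in range(1000):
    if random.random() < EPSILON:
        arm = random.choice([0, 1])              # explore: the "error" part
    else:
        arm = max(estimates, key=estimates.get)  # exploit the best guess so far
    reward = 1.0 if random.random() < PAYOFF[arm] else 0.0
    counts[arm] += 1
    # incremental running average of the rewards observed for this arm
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # the estimates should approach the hidden payoffs
```

The single scalar reward does all of the directional work here, which is the role that something like emotions would play in a more general learner.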

Looking at us humans, we have a number of such systems.  We have pain avoidance, food seeking, pleasure seeking, curiosity, and emotions.  In the machine learning lab, special-purpose reward systems can be set up for particular learning tasks.  But an artificial person would need something more general in order to support a general learning ability.  And I doubt that can be done with computation alone.

Here’s a question that I wonder about.  Is a simple motivational system (or reward system) sufficient?  Or do we need a multi-dimensional reward system if the artificial person is to have a multi-dimensional learning ability?  I am inclined to think that we need a multi-dimensional reward system, but that’s mostly a guess.
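
One naive way to picture a multi-dimensional reward system is as a vector of drive signals that must somehow be traded off against each other.  The drives, values, and weighted-sum combination below are purely illustrative assumptions; whether any fixed scheme of this kind would suffice is exactly the open question.

```python
# Illustrative only: one crude way to combine several reward channels
# (pain avoidance, food seeking, curiosity, ...) into a single learning
# signal.  The channel names, values, and weights are invented; whether
# any fixed scalarization is adequate is the open question above.
def combined_reward(channels, weights):
    """channels and weights are dicts keyed by drive name."""
    return sum(weights[name] * value for name, value in channels.items())

channels = {"pain": -0.8, "food": 0.5, "curiosity": 0.2}
weights  = {"pain": 2.0,  "food": 1.0, "curiosity": 0.3}
print(combined_reward(channels, weights))  # -1.04: pain dominates here
```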

The data problem

A computer works with data.  But the more I study the problems of learning and knowledge, the more I become persuaded that there isn’t any data to compute with.  AI folk expect input sensors to supply data.  But it looks to me as if that would be pretty much meaningless noise.  In order to have meaningful data, the artificial person would have to find (perhaps invent) ways to get its own data (using those input sensors).

If I am right about that, then computation isn’t that important.  The problem is in getting useful data in the first place, rather than in doing computations on data received passively.

My conclusion

A system built of logic gates does not seem to be what is needed.  Instead, I have concluded that we need a system built out of homeostatic processes.

165 thoughts on “AI Skepticism”

  1. Even your basic logic chip does categorization of its input signals to decide whether to treat the inputs as 0 or 1 bits.

    Exactly. So logic gates can categorize, assign symbols, and process information.

    What’s missing? Why do you think that “a system built of logic gates does not seem to be what is needed” for AI?

  2. The ability of a logic gate to categorize is fixed at the factory. The ability of neuronal systems to categorize is adaptive. That adaptivity is important for adaptive learning.

  3. If there were any reason to do so, you could build logic gates with variable and adaptive thresholds. However, it makes more engineering sense to leave the thresholds fixed and implement the adaptability at the system level rather than at the gate level.

    For example, artificial neural networks are typically implemented as software running on processors built out of — you guessed it — logic gates. Faster versions of the same models can be implemented directly in hardware using FPGAs or even ASICs, both of which are also built out of logic gates. These systems are fully adaptive, despite the fact that the individual logic gates are not.
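
    To make the point concrete, here is a minimal sketch of system-level adaptivity running on fixed hardware: a single perceptron whose weights and threshold adjust during training, even though nothing about the gates executing the code ever changes. The task (learning AND) and the learning rate are chosen purely for illustration.

    ```python
    # A single perceptron learning AND.  The adaptivity lives entirely at
    # the system (software) level; the fixed logic gates executing this
    # code never change.  The task and learning rate are illustrative.
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]   # adjustable weights
    b = 0.0          # adjustable threshold (bias)
    LR = 0.1         # learning rate

    for epoch in range(20):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += LR * err * x1
            w[1] += LR * err * x2
            b += LR * err

    for (x1, x2), _ in samples:
        print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
    ```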

    Everything we’ve discussed in this thread — homeostasis, categorizing, symbolizing, information processing, adaptation — can be done by systems based on logic gates.

    So to repeat my question: What’s missing? Why do you think that “a system built of logic gates does not seem to be what is needed” for AI?

  4. Ah, this brings back memories.

    keiths vs. Zachriel.

    keiths obvioulsy wrong, incapable of admitting it, browbeating his opponent.

    Good luck Neil.

  5. Mung,

    keiths obvioulsy wrong…

    Joe G is rubbing off on you. For most people, that wouldn’t be a good thing.

    Anyway, since I’m ‘obvioulsy’ wrong, let’s hear you explain why.

  6. keiths: So to repeat my question: What’s missing?

    I already answered that in the original post.

    Why does this bother you so much?

    I am not campaigning against AI. It does not bother me that some of the taxes that I pay go to support AI research. I only started this topic because I was asked to explain why I disagree with computationalism. I was not expecting to persuade anybody to change their minds.

    So why does this bother you? Why are you opposed to there being a diversity of viewpoints on this? It is not as if it is a settled issue. For myself, I welcome the diversity.

  7. Neil,

    This is The Skeptical Zone. Why are you surprised when someone challenges a controversial claim you make in an OP? And not just some offhand claim, but the main conclusion of your post!

    My conclusion

    A system built of logic gates does not seem to be what is needed. Instead, I have concluded that we need a system built out of homeostatic processes.

    You go on:

    So why does this bother you? Why are you opposed to there being a diversity of viewpoints on this? It is not as if it is a settled issue. For myself, I welcome the diversity.

    When someone questions you, it doesn’t mean they are “opposed to there being a diversity of viewpoints.” It just means they disagree with you!

    Good grief, Neil. If you can’t or won’t defend your thesis, that’s fine. Just say so, but don’t pretend that I’m suppressing dissent by questioning you.

  8. When someone questions you, it doesn’t mean they are “opposed to there being a diversity of viewpoints.” It just means they disagree with you!

    That you disagreed was already clear.

    So then you ask another question. But it is the same question that you have asked before, and that I have already answered.

    No, that is not just disagreement. That’s more of a stubborn insistence.

    If you expect a formal logical proof that AI couldn’t work, then I don’t have one. A formal logical proof requires agreed premises. There is no agreement on premises.

    For that matter, you don’t have a formal proof that AI could work, and for similar reasons. And the empirical evidence is against it. We have far exceeded the processor and memory requirements that Turing thought would be sufficient, with no sign of real progress.

  9. Neil,

    So then you ask another question. But it is the same question that you have asked before, and that I have already answered.

    No, that is not just disagreement. That’s more of a stubborn insistence.

    If you can see what is wrong with the following hypothetical dialogue, you’ll recognize what is wrong with your statement above:

    ID Critic: Can you give a reason for why complex life forms can’t be the product of unguided evolution?

    ID Supporter: The odds are too steep. Tornadoes in junkyards don’t produce 747s.

    ID Critic: That’s a bad analogy. [Insert lengthy explanation of Hoyle’s Fallacy here.] Taking all of that into account, can you give a reason for why complex life forms can’t be the product of unguided evolution?

    ID Supporter: I’ve already answered that question. Why are you so stubbornly insistent?

  10. Neil,

    If you expect a formal logical proof that AI couldn’t work, then I don’t have one.

    No, I’m just asking if you can support your claim about the insufficiency of logic gates in particular, and computation more generally, as the basis of an artificially intelligent system.

    Everything you’ve mentioned so far — homeostasis, categorizing, symbolizing, information processing, adaptation — can be accomplished by systems based on logic gates.

    For that matter, you don’t have a formal proof that AI could work…

    I’ve never claimed to. I’m open to the idea that AI might be impossible. I just haven’t seen any good evidence for that claim.

    And the empirical evidence is against it. We have far exceeded the processor and memory requirements that Turing thought would be sufficient, with no sign of real progress.

    I think IBM’s Watson is “real progress”. Don’t you?

  11. No, I’m just asking if you can support your claim about the insufficiency of logic gates in particular, and computation more generally, as the basis of an artificially intelligent system.

    You call it a claim. I have not said that. I have been expressing my opinion, and the reasons for it. I have not been attempting to persuade anybody. On the other hand, you are trying to persuade. So where’s your persuasive argument?

    Everything you’ve mentioned so far — homeostasis, categorizing, symbolizing, information processing, adaptation — can be accomplished by systems based on logic gates.

    When you have that all working, come back and demonstrate it. In the meantime, you’ve got nothing (other than 60 years of consistent failure).

    I think IBM’s Watson is “real progress”. Don’t you?

    It’s a success for the power of brute force, but a failure as a demonstration of artificial intelligence.

  12. For one thing, I have given a lot more detail than your hypothetical ID supporter. For another, people usually only argue against ID supporters who are trying to persuade, and I am not attempting to persuade anybody.

  13. Neil,

    You call it a claim. I have not said that. I have been expressing my opinion, and the reasons for it.

    Give me a break, Neil. When you put a statement under the heading “My conclusion”, you are making a claim.

    And even if it weren’t a claim, but merely an “opinion, and the reasons for it,” so what? This is The Skeptical Zone, and it is just as possible to be skeptical of an opinion as of a claim.

    Why should your claims, statements, opinions or musings be exempt from criticism and questioning?

    When you have that all working, come back and demonstrate it.

    It’s already been done. Homeostasis — digital thermostats. Categorizing — logic gates, by your own admission. Symbolizing — logic gates in a digital temperature sensor. Information processing — computers. Adaptation — artificial neural networks (in both hardware and software forms).

    Can you name something that is essential for artificial intelligence that cannot be carried out by a system based on logic gates?

  14. Looking back, the problem is already evident in our first exchange in this thread.

    keiths:

    Neil,

    A system built of logic gates does not seem to be what is needed. Instead, I have concluded that we need a system built out of homeostatic processes.

    Does that mean that you don’t think homeostatic processes can be implemented using logic gates? If so, why?

    Neil:

    If a system built of logic gates is processing (doing anything), then it is switching states. I guess it depends on whether you consider that to count as stasis.

    It seems clear in retrospect that you were confusing the controller with the thing being controlled. In a homeostatic system, it is the thing being controlled that is static (or nearly so). The controller itself may be far from static.

    The logic gates in a digital thermostat aren’t in stasis, but that doesn’t negate the fact that the system is homeostatic. Its purpose is to keep the temperature constant, after all, and that is the standard textbook example of homeostasis.
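
    A minimal sketch of that distinction, with invented room dynamics: the controller below is constantly switching states, yet the controlled variable stays near the setpoint, and that is what makes the system homeostatic.

    ```python
    # A bang-bang thermostat: the controller keeps switching states, but
    # the controlled variable stays near the setpoint.  The room model
    # (heater power and heat-loss constants) is invented for illustration.
    SETPOINT, HYSTERESIS = 20.0, 0.5
    temp, outside, heater_on = 15.0, 5.0, False

    for minute in range(120):
        if temp < SETPOINT - HYSTERESIS:
            heater_on = True                  # controller state changes...
        elif temp > SETPOINT + HYSTERESIS:
            heater_on = False
        heat_in = 0.4 if heater_on else 0.0   # heater power per minute
        leak = 0.02 * (temp - outside)        # heat lost to the outside
        temp += heat_in - leak                # ...while temp hovers near 20
        if minute % 20 == 0:
            print(minute, round(temp, 2), heater_on)
    ```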

  15. It’s already been done. Homeostasis — digital thermostats. Categorizing — logic gates, by your own admission. Symbolizing — logic gates in a digital temperature sensor. Information processing — computers. Adaptation — artificial neural networks (in both hardware and software forms).

    That’s a weasel answer. It should have been obvious that I was talking about getting a fully working artificial person.

    Emulation only ever emulates selected properties or behaviors. The claim that you can emulate what’s needed is empty until you are sure of what details need to be emulated. And you haven’t a clue as to what needs to be emulated. Your answers are driven by ideology, not by knowledge of the requirements.

    Neil Rickert: You call it a claim. I have not said that. I have been expressing my opinion, and the reasons for it. I have not been attempting to persuade anybody.

    I will interpret your statements metaphorically, free of truth-values, but hinting at something transcendent.

  17. I will interpret your statements metaphorically, free of truth-values, but hinting at something transcendent.

    Transcendent — no. But hinting at very different ideas about mind and knowledge than those that keiths is considering.

  18. Neil,

    Emulation only ever emulates selected properties or behaviors. The claim that you can emulate what’s needed is empty until you are sure of what details need to be emulated.

    I haven’t made any such claim. I’ve been asking about your claim:

    My conclusion

    A system built of logic gates does not seem to be what is needed. Instead, I have concluded that we need a system built out of homeostatic processes.

    You’re dodging the question:

    Can you name something that is essential for artificial intelligence that cannot be carried out by a system based on logic gates?

    It’s your claim. Can you justify it? If not, that’s fine — but have the courtesy to admit it, instead of pretending that my question is illegitimate.

  19. You’re dodging the question:

    Can you name something that is essential for artificial intelligence that cannot be carried out by a system based on logic gates?

    That’s dishonest. I have answered that. It is answered in the original post. It has been answered in comments. You don’t agree with my answer, so you keep asking the same question, to the point of it becoming harassment.

    It’s your claim that everything needed can be built out of logic chips. So just go build an artificial person already. Put up or shut up.

    I’m done with responding to your harassment.

  20. Neil,

    I hope you realize that your accusation of harassment is ridiculous.

    In a productive discussion, each person responds to what the other person says. I have responded to each of your points, explaining exactly why homeostasis, categorizing, symbolizing, and adaptation are well within reach for systems based on logic gates. Can you respond to my points or not?

    The ball is in your court. You’ve already tried the following:

    a) pretending that by asking questions, I am trying to stifle dissent;

    b) claiming that your OP is exempt from criticism because you are only “expressing [your] opinion, and the reasons for it”;

    c) attributing claims to me that I have not made (and you’ve done it again. See below); and

    d) accusing me of harassment for merely asking a question again after explaining why your previous answers don’t work.

    Those strategies didn’t work very well. How about trying one of the following?

    e) if you disagree with my points about homeostasis, categorizing, etc., then explain why you think I’m wrong and why those things really are out of reach for systems based on logic gates; or

    f) come up with a different reason why systems based on logic gates cannot achieve AI; or

    g) admit that you can’t back up your claim, and go off and think about it some more.

    You also might want to think about why you are so sensitive to criticism of your ideas.

  21. It’s your claim that everything needed can be built out of logic chips. So just go build an artificial person already. Put up or shut up.

    I haven’t made that claim. Why are you attributing it to me? This seems to be a continuing bad habit of yours.

    I am questioning your assertion about the inadequacy of systems based on logic gates, which was the conclusion of your OP. Can you defend it?

    (I apologize if this is a duplicate reply.)

    Neil, can I ask if you think a human-quality mind can be implemented using only material objects? These would include gates, but also analog and other material components. (If I remember right, the human brain even uses chemical signals.) Or do you believe something magical is also needed?

    You say that “directionality or motivation or purpose” are important and I agree, but I think those problems were being solved long before anything remotely like a brain existed. For example, I’ve read of a single-celled organism with two light sensors and two flagella. More light hitting a sensor produces more output from it. If the sensor output is connected to a flagellum, it makes the flagellum beat faster.

    If you wire the left sensor to the left flagellum and the right sensor to the right flagellum, you get an organism that turns away from light. If you cross the connections, the organism travels towards the light.

    Presto! You have two different goal-seeking organisms – one seeks light, one seeks darkness. You have intentionality and goal seeking before you even have a brain.
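
    That wiring is simple enough to simulate. Below is a rough sketch of such an organism as a Braitenberg-style vehicle; the geometry, gains, and step sizes are all invented, and only the crossed-versus-straight wiring idea comes from the description above.

    ```python
    import math

    # A crude simulation of the two-sensor, two-flagellum organism as a
    # Braitenberg-style vehicle.  All constants are invented; only the
    # wiring pattern (straight vs. crossed) comes from the text above.
    LIGHT = (0.0, 0.0)

    def intensity(x, y):
        """Light intensity at (x, y), falling off with squared distance."""
        d2 = (x - LIGHT[0]) ** 2 + (y - LIGHT[1]) ** 2
        return 1.0 / (1.0 + d2)

    def step(x, y, heading, crossed):
        # Two sensors mounted ahead of the body, angled left and right.
        left = intensity(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
        right = intensity(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
        l_motor, r_motor = (right, left) if crossed else (left, right)
        # A stronger left motor swings the body to the right, and vice versa.
        heading += 0.5 * (r_motor - l_motor)
        speed = 0.1 * (l_motor + r_motor)
        return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

    for crossed in (False, True):
        x, y, heading = 3.0, 0.5, math.pi
        for _ in range(500):
            x, y, heading = step(x, y, heading, crossed)
        print("crossed" if crossed else "straight",
              "final distance from light:", round(math.hypot(x, y), 2))
    ```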

    Whatever goes into producing intentionality and goal seeking can be found amongst the “lower” animals. If you agree that lower animals don’t need magic for intentionality and goal seeking behavior, it’s hard to argue that human intellects do.

    Neil, can I ask if you think a human-quality mind can be implemented using only material objects?

    I don’t like the way you worded that.

    I think you are really asking about something that I already answered in the original post, where I said: “As you will see from the post at my blog, I don’t have a problem with the idea that we could create an artificial person. I see that as possible, at least in principle, although it will likely turn out to be very difficult.”

    You asked about “implement”. In all honesty, I think the problem is too difficult for us to be likely to succeed in any attempt at implementation.

    You say that “directionality or motivation or purpose” are important and I agree, but I think those problems were being solved long before anything remotely like a brain existed.

    I agree. And I agree with your examples. I’m inclined to say that there is more intentionality to be found in a flagellate than will ever be found in a logic gate.

    If you agree that lower animals don’t need magic for intentionality and goal seeking behavior, it’s hard to argue that human intellects do.

    I’m puzzled that you even say that. I have not been arguing that humans need magic. Roughly speaking, you could say my view is that the human intellect depends on lots of the kind of stuff that flagellates and other micro-organisms can do, and logic gates do not provide those abilities. AI equates intelligence with logic; my skepticism about AI is that I see that as a false equivalence.

  24. Neil,

    Throughout the thread, you’ve been confusing the properties of a system with the properties of its individual components.

    You pointed out that logic gates aren’t in stasis when they are processing, but the real question is whether systems based on logic gates can be homeostatic. (They can.)

    (There’s also a second confusion there. You are mistaking the homeostatic controller for the thing being held in stasis, as well as confusing the properties of the controller with the properties of its parts.)

    You argued that logic gates can’t exhibit intentionality, when the real question is whether systems based on logic gates can do so. (They can.)

    Now you’re making a similar mistake with regard to logic and intelligence:

    AI equates intelligence with logic; my skepticism about AI is that I see that as a false equivalence.

    AI does not equate intelligence with logic. Logic gates implement logic functions, of course, but that does not mean that systems based on logic gates can only do logic.

    You made the same mistake in the OP:

    Interestingly, Patrick’s message, where he asked for this thread, contained a picture of Spock from Star Trek. As a Star Trek character, Spock was known to be very logical and not at all emotional. That’s what I think you get with computation. However, as I see it, something like emotions are actually needed. They are what would give an artificial person some sense of direction.

    eigenstate explained your error:

    On the emotion thing, there’s a problem of (inadvertent) equivocation on the word “logic” that comes up regularly in these discussions. Your Spock reference highlights the equivocation. In a colloquial sense, Spock is “logical” because he’s minimally emotional (although that’s a misconception about emotions as well, but I won’t bother with that here). But the most emotional, “illogical” person you might compare Spock to is exactly as logical as Spock or anyone else in a computational sense of the term, and the computational sense of “logic” is what we are focusing on here, right?

    That is, emotion is computation in as thoroughgoing a sense as mathematical figuring in one’s head. Emotional responses are driven by stimuli and interaction with the brain, nervous system and other parts of the body in rule-based, physical ways.

  25. So how do we get from components to AI? This seems to be one of those fusion-power things, where there is no material obstacle other than inventing the configuration.

    My AI skepticism lies in thinking the inventing will be very difficult.

  26. It is probably a mistake for me to reply to this.

    Throughout the thread, you’ve been confusing the properties of a system with the properties of its individual components.

    Quite right. I should just pray. And if I pray hard enough, a complete working artificially intelligent system will just pop into existence. There’s no need for me to think about implementation details.

    </sarcasm>

    You’ve made that same false charge before. I ignored it, because it was absurd.

    All you are doing is ascribing to me some views that I do not have, and then criticizing me for what you falsely ascribe to me.

    Just stop it already. You are wasting everybody’s time.

    …, but the real question is whether systems based on logic gates can be homeostatic. (They can.)

    You have argued this several times. What you argue is irrelevant.

    I am not saying that homeostasis is a good thing in its own right. Rather, I am saying that homeostasis is what seems to be needed to provide a general autonomous learning ability. Implementing homeostasis with logic chips does nothing useful, unless doing so confers a general autonomous learning ability. Machine learning research, based on logic, has not demonstrated anything comparable to the learning ability of a human child. So your point about logic gates and homeostasis fails to address the issue. The “no free lunch” theorems of Wolpert & Macready, and what Vapnik shows in “The Nature of Statistical Learning Theory”, both suggest a big problem for using logic gates to provide the kind of autonomous general learning that we should require.
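
    For reference, the core “no free lunch” result of Wolpert & Macready (1997) says that, averaged over all possible objective functions, any two search algorithms perform identically:

    $$\sum_f P(d_m^y \mid f, m, a_1) = \sum_f P(d_m^y \mid f, m, a_2)$$

    Here $f$ ranges over all objective functions, $d_m^y$ is the sequence of $m$ cost values sampled so far, and $a_1$ and $a_2$ are any two algorithms. How much this constrains learning in realistic, non-uniform environments depends on the distribution over problems, which is where the disagreement below picks up.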

    In an earlier post, you pointed to Watson as a success. But the programmers admitted that most of the learning by Watson was really learning by the programmers, who programmed those “learned abilities” into Watson.

  27. Watson is a useful technology, but it’s a dead end. What we lack is not hardware but knowledge of how to emulate general learning.

  28. Neil,

    Quite right. I should just pray. And if I pray hard enough, a complete working artificially intelligent system will just pop into existence. There’s no need for me to think about implementation details.

    </sarcasm>

    Who said anything about ignoring implementation details?

    All you are doing is ascribing to me some views that I do not have, and then criticizing me for what you falsely ascribe to me.

    That’s richly ironic, given that you just falsely ascribed a view to me (that implementation details could be ignored), not to mention your earlier claims that I am a dualist and that I am “opposed to there being a diversity of viewpoints on this.”

    keiths:

    …, but the real question is whether systems based on logic gates can be homeostatic. (They can.)

    Neil:

    You have argued this several times. What you argue is irrelevant.

    It’s directly relevant to your claim:

    My conclusion

    A system built of logic gates does not seem to be what is needed. Instead, I have concluded that we need a system built out of homeostatic processes.

    You attempt to defend your claim:

    Implementing homeostasis with logic chips does nothing useful, unless doing so confers a general autonomous learning ability. Machine learning research, based on logic, has not demonstrated anything comparable to the learning ability of a human child. So your point about logic gates and homeostasis fails to address the issue. The “no free lunch” theorems of Wolpert & Macready, and what Vapnik shows in “The Nature of Statistical Learning Theory”, both suggest a big problem for using logic gates to provide the kind of autonomous general learning that we should require.

    I don’t see the “big problem”. Could you elaborate?

    It’s also interesting that you see NFL as a problem for computers, but not for brains. Why do you think brains are exempt?

    Last, you are repeating the mistake that IDers make when invoking NFL against evolution. As Wolpert put it last year:

    …the primary importance of the NFL theorems for search is what they tell us about “the underlying mathematical ‘skeleton’ of optimization theory before the ‘flesh’ of the probability distributions of a particular context and set of optimization problems are imposed”. So in particular, while the NFL theorems have strong implications if one believes in a uniform distribution over optimization problems, in no sense should they be interpreted as advocating such a distribution.

  29. petrushka,

    So how do we get from components to AI? This seems to be one of those fusion-power things, where there is no material obstacle other than inventing the configuration.

    My AI skepticism lies in thinking the inventing will be very difficult.

    That’s the big question. The early AI pioneers were, in retrospect, comically optimistic about the timeline.

    I’m a bit more sanguine than you are about the prospects for designed AI. You write:

    So my prediction is that even with artificial neurons as a given, AI will be a slow and evolutionary climb, characterized by “random” variation, selection and extinction. I don’t see any “rational” path.

    I like the idea of using evolutionary methods as part of the design process, but I don’t think we have to evolve AI from scratch. I think the right approach is to reverse engineer existing intelligences, thus taking advantage of the work that evolution has already done over millions of years.

    Then the task of crafting an artificial intelligence could be partly deliberate design, partly the use of modules “pre-designed” by evolution, and partly evolution in real time.

  30. Works for OOL, doesn’t it?

    Seriously, I think AI guys haven’t looked at how hard it is to model chemistry. And it is chemistry you are emulating. The information processing metaphor is overextended.

    I’d be quite happy to be wrong.

  31. Petrushka,

    I think the chemistry is incidental, and that what’s important is the behavior of the neurons, not their inner workings.

    At times you have seemed to agree, as when you wrote:

    The problem may not be the silicon neuron (google brains in silicon), but the network that is emergent. All the problem-solving networks I am aware of are evolved.

    Other times you’ve seemed to indicate a belief that neurons do something important that models will never duplicate:

    I do think neurons do something that is non-computable. I’m not sure it is “essential” to AI, but I think it makes artificial humans unlikely.

    How do you reconcile those positions?

  32. I think brains are emergent. I’m not convinced that the emergent behavior can easily be replicated in silicon. And even if the hardware can be done, I think the software is beyond our ability to analyze.

    This is parallel to my argument that biological design is “impossible.”

    This does not mean that Watson-like systems won’t be developed and be very useful, but I don’t see a path from them to AI.

  33. keiths: It’s also interesting that you see NFL as a problem for computers, but not for brains. Why do you think brains are exempt?

    There’s no doubt that NFL is a problem for computational search. However, I do not see the brain as doing either computation or search. So NFL isn’t relevant (and logic gates aren’t useful).

  34. Neil,

    There’s no doubt that NFL is a problem for computational search.

    Only “if one believes in a uniform distribution over optimization problems,” as Wolpert explains. Do you believe that? If so, how do you justify your belief?

    However, I do not see the brain as doing either computation or search. So NFL isn’t relevant (and logic gates aren’t useful).

    Which gets us back to the question you hate so much: Can you name something — anything — essential to intelligence that brains can do but systems based on logic gates cannot? So far, you haven’t done so.

  35. Petrushka,

    First, to say that an intelligent system was produced by evolution is different from saying that a system cannot be intelligent unless it has the ability to evolve. The latter is clearly not true. Brains don’t become unintelligent merely because their possessors are infertile, for example.

    If you meant that intelligence is something that can only arise via evolution, not by design, then that doesn’t answer the question, which is: What characteristic(s) essential for intelligence does a brain have that could never be duplicated in a system based on logic gates?

    Second, evolution isn’t out of reach for systems based on logic gates:

    Evolving artificial neural networks
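
    As a toy illustration of that idea (not of the specific work linked above): a crude (1+1) evolution strategy that mutates the weights of a tiny network until it computes XOR. The architecture, mutation size, and iteration budget are invented, and an unlucky random seed can stall such a simple scheme.

    ```python
    import math, random

    # Toy neuroevolution: a (1+1) evolution strategy mutating the weights
    # of a tiny 2-2-1 tanh network until it fits XOR.  All parameters are
    # invented for illustration; an unlucky seed may stall the search.
    CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def forward(w, x1, x2):
        h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
        h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
        return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

    def loss(w):
        return sum((forward(w, x1, x2) - t) ** 2 for (x1, x2), t in CASES)

    random.seed(0)
    parent = [random.uniform(-1, 1) for _ in range(9)]
    best = loss(parent)
    for _ in range(20000):
        child = [wi + random.gauss(0, 0.1) for wi in parent]
        c = loss(child)
        if c <= best:                 # keep any mutation that doesn't hurt
            parent, best = child, c

    for (x1, x2), t in CASES:
        print((x1, x2), round(forward(parent, x1, x2), 2), "target", t)
    ```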

  36. You keep going back to hardware, which I think is a barrier but not prohibitive. What I think is a problem is the behavior of the system.

    My understanding is we already have a supercomputer that can emulate the quantity of neurons in the human brain, and their connections. But it doesn’t do anything.

  37. You keep going back to hardware…

    Because the conclusion of Neil’s OP is that AI can’t be implemented by a system based on logic gates.

    My understanding is we already have a supercomputer that can emulate the quantity of neurons in the human brain, and their connections.

    No, far from it. This is the largest one I’m aware of.

  38. Ok, one percent. But the interconnections took hundreds of millions of years to evolve. I’m simply skeptical there are easy shortcuts.

  39. Ok, one percent. But the interconnections took hundreds of millions of years to evolve. I’m simply skeptical there are easy shortcuts.

    So am I. But there are some harder shortcuts available, including the one I mentioned before. Rather than trying to re-evolve the brain, we should reverse engineer it as much as possible.

  40. You keep going back to hardware, which I think is a barrier but not prohibitive. What I think is a problem is the behavior of the system.

    That’s keiths. I agree with you on the importance of behavior. But whenever I try to move the discussion in that direction, keiths comes back with a demand that I explain why logic gates couldn’t be used. He is doing a kind of mini Gish gallop.

    I’m not surprised at this. Many (but not all) AI proponents are very ideological.

  41. Neil,

    But whenever I try to move the discussion in that direction, keiths comes back with a demand that I explain why logic gates couldn’t be used.

    That’s harassment! How dare anyone challenge the conclusion of an OP! The first rule of The Skeptical Zone, as everyone knows, is that absolutely no skepticism is allowed — especially if the OP is authored by Neil Rickert.

    Neil, I suggest you take a few deep breaths, put yourself in an objective frame of mind, and reread the comments in this thread. You might want to keep track of things like a) who labels the other’s views as “nonsense”; b) who bristles at being challenged; c) who refuses to take responsibility for his own statements; d) who calls the other an ideologue, etc.

    You won’t like what you learn about yourself, but it may motivate you to change.

  42. keiths: How dare anyone challenge the conclusion of an OP!

    But you have not actually challenged anything. You are just being repetitively annoying. All you do is push your ideology of logic chips über alles. Since I have not suggested that other people abandon their current approach to AI, this seems pointless.

  43. petrushka,

    Could you reverse engineer a cell in silicon?

    I assume you mean “Could you reverse engineer a cell down to the level of chemistry and then re-implement it in silicon?”

    If so, then my answer is yes, in that entire cells have already been successfully simulated.

    However, I don’t think that’s the right approach for AI, and I’m unaware of anyone who does. Better to use more abstract models of neurons, but with connection schemes inspired by biology.

  44. Neil,

    But you have not actually challenged anything.

    ??

    You are just being repetitively annoying.

    That’s because you find it annoying that I’m challenging your thesis — particularly since you seem unable to defend it.

    All you do is push your ideology of logic chips über alles.

    I don’t subscribe to that ideology. I think we should use whatever works, whether that be logic gates, artificial neurons, or something else entirely.

    I am challenging your claims about logic gates. You have said, in turn, that homeostasis, categorizing, symbolizing, and adaptation cannot be accomplished by systems based on logic gates. I have shown you that they can.

    You have also claimed that the NFL theorems “suggest a big problem” for AI systems based on logic gates. As Wolpert himself points out, such a claim depends on assumptions about the distribution. You haven’t justified those assumptions.

    Given all of that, do you still stand behind your claim? If no, are you honest enough to admit that? If yes, can you defend your statement, or will you continue dishonestly to blame me for having the audacity to challenge it?

  45. I think we should use whatever works, whether that be logic gates, artificial neurons, or something else entirely.

    Then that only confirms that you are not challenging anything.

    I am challenging your claims about logic gates. You have said, in turn, that homeostasis, categorizing, symbolizing, and adaptation cannot be accomplished by systems based on logic gates.

    Looking back at the OP, I did not say that. What I said was a lot weaker than that.

    I’ll repeat that you haven’t actually challenged anything, but you have made a lot of noise and wasted time for both of us.

    Given all of that, do you still stand behind your claim?

    What claim?

    I still hold the same opinion, and for the same reasons that I outlined. You have not provided any basis at all for changing my opinion.
