AI Skepticism

In another thread, Patrick asked:

If it’s on topic for this blog, I’d be interested in an OP from you discussing why you think strong AI is unlikely.

I’ve now written a post on that at my own blog.  Here I will summarize, and perhaps expand a little on, what I see as the main issues.

As you will see from the post at my blog, I don’t have a problem with the idea that we could create an artificial person.  I see that as possible, at least in principle, although it will likely turn out to be very difficult.  My skepticism about AI is that I see computation as too limited.

I see two problems for AI.  The first is a problem of directionality or motivation or purpose, while the second is a problem with data.

Directionality

Interestingly Patrick’s message, where he asked for this thread, contained a picture of Spock from Star Trek.  As a Star Trek character, Spock was known to be very logical and not at all emotional.  That’s what I think you get with computation.  However, as I see it, something like emotions are actually needed.  They are what would give an artificial person some sense of direction.

To illustrate, consider the problem of learning.  One method that works quite well is what we call “trial and error”.  In machine learning systems, this is called “reinforcement learning”.  And it typically involves having some sort of reward system that can be used to decide whether a trial-and-error step is moving in the right direction.
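
To make this concrete, here is a minimal trial-and-error sketch in Python (a toy epsilon-greedy bandit).  The reward function, the number of actions, and the parameter values are all invented for illustration, not drawn from any particular system.

    import random

    # Trial-and-error learning with a scalar reward signal.  The learner tries
    # actions, receives rewards, and gradually favours whatever pays off most.

    def epsilon_greedy_bandit(reward_fn, n_actions=3, steps=1000, epsilon=0.1):
        values = [0.0] * n_actions   # running estimate of each action's reward
        counts = [0] * n_actions
        for _ in range(steps):
            if random.random() < epsilon:              # explore: try something new
                action = random.randrange(n_actions)
            else:                                      # exploit: pick the best so far
                action = max(range(n_actions), key=lambda a: values[a])
            reward = reward_fn(action)                 # the reward system gives direction
            counts[action] += 1
            values[action] += (reward - values[action]) / counts[action]
        return values

    # Illustrative reward: action 2 pays off most on average, and the learner
    # discovers this without being told.
    payoffs = [0.2, 0.5, 0.8]
    print(epsilon_greedy_bandit(lambda a: payoffs[a] + random.gauss(0, 0.1)))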

Looking at us humans, we have a number of such systems.  We have pain avoidance, food seeking, pleasure seeking, curiosity, and emotions.  In the machine learning lab, special-purpose reward systems can be set up for particular learning tasks.  But an artificial person would need something more general in order to support a general learning ability.  And I doubt that can be done with computation alone.

Here’s a question that I wonder about.  Is a simple motivational system (or reward system) sufficient?  Or do we need a multi-dimensional reward system if the artificial person is to have a multi-dimensional learning ability?  I am inclined to think that we need a multi-dimensional reward system, but that’s mostly a guess.
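
One way to picture the multi-dimensional question is as a vector of reward signals that must somehow be combined before outcomes can be compared.  The weighted-sum scheme below is only one possibility; the signal names and weights are invented for illustration.

    # Hypothetical multi-dimensional reward: several signals (pain avoidance,
    # food seeking, curiosity, ...) are combined into one number that a learner
    # such as the bandit sketched above could use.

    def combined_reward(signals, weights):
        """Scalarize a reward vector with a weighted sum (one scheme among many)."""
        return sum(weights[name] * value for name, value in signals.items())

    weights = {"pain_avoidance": 2.0, "food": 1.0, "curiosity": 0.5}
    signals = {"pain_avoidance": -0.3, "food": 0.8, "curiosity": 0.4}
    print(combined_reward(signals, weights))   # roughly 0.4 with these numbers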

The data problem

A computer works with data.  But the more I study the problems of learning and knowledge, the more that I become persuaded that there isn’t any data to compute with.  AI folk expect input sensors to supply data.  But it looks to me as if that would be pretty much meaningless noise.  In order to have meaningful data, the artificial person would have to find (perhaps invent) ways to get its own data (using those input sensors).

If I am right about that, then computation isn’t that important.  The problem is in getting useful data in the first place, rather than in doing computations on data received passively.

My conclusion

A system built of logic gates does not seem to be what is needed.  Instead, I have concluded that we need a system built out of homeostatic processes.

165 thoughts on “AI Skepticism”

  1. I’m recovering from surgery, so I haven’t been posting much, and until today all of it was flat on my back with a tablet. So today I’ll try one last time to make myself understood. I really have no emotions invested in being right. Everything on this thread is speculation. I would just like to be understood.

    I have no problem with the physics or metaphysics of AI. I am pessimistic about the timeline. I’m pessimistic about a lot of technology timelines. I see the possibility of a gap in human space travel. If it’s picked up at all it will be to build prestige for some country like China. Faster than light travel will not happen, ever. Fusion power will remain a dream for many, many decades. Cancer will not be cured. War will not end. Inner cities will look like post-apocalyptic movie sets. (Come to think of it, they already are such sets.)

    The specific reason I am pessimistic about AI is that I think we do not know how to emulate chemistry. (What’s that got to do with anything, I hear you cry.) Everything, I reply.

    This goes back to all those discussions we had with UprightBiped. I argued, and still argue, that he reified a metaphor and argued from the properties of the metaphor rather than the properties of chemistry. He abstracted DNA and translation into information processing and then argued about the limitations imposed by information processing.

    I’m sorry, but you lose properties when you abstract things. You lose things when you abstract food into constituent components and try to reassemble them into processed Food Helper.

    Both processed food and software emulations vary in quality and thoroughness. Apparently we can do a satisfactory job of emulating a thermonuclear bomb, and we are getting better at emulating weather systems.

    But as we have been saying to IDists for a long time, the map is not the territory. Emulations diverge very quickly from reality.

    I’m not aware that we have any functional emulations of brains. I’d like to think that somewhere there is an iEarthworm or iMosquito app. Might be, but I haven’t seen it.

    It is my opinion (just an opinion) that I-ness evolves. I see no shortcut to the view that organisms are whole, reproducing units, of which a brain is a part. No matter how small or rudimentary, a brain is not a data processor. It is part of a complex chemical network that is unitary. The brain evolves only if the entire unit is successful.

    It is impossible to predict what a brain needs to do. Just as it is impossible to predict what any coding sequence needs to do. Changes happen and some are more successful than others.

    The emulations I have seen I would call cargo-cult AI. They emulate a subset of the observable behavior of a system, but do not emulate an evolving organism.

    Abstraction is a wonderful thing. We learn the abstract principles of flight, and we build airplanes. We will eventually do the same thing with brains. We will build cybernetic problem solvers that will be very powerful and useful (or destructive).

    But I don’t think we are on the track that leads to Star Trek’s Data or any of Asimov’s I-robots. The I-ness of such beings will have to evolve to serve the survival of such beings.

    Perhaps we can kick-start it and perhaps not.

    I started this rant by talking about chemistry. That is because I think there is a germ of truth in the Penrose conjecture. I don’t accept the quantum woo, but I think that chemistry evolves in ways that will be difficult to emulate.

    It’s the problem of tar. One problem with early OOL theories is that everything produced by the Miller-Urey experiment turns to tar. We are gradually moving beyond that, but progress is slow.

    I think an equivalent kind of tar occurs when we try to scale up GAs. I suspect we can get beyond it, but I doubt if we can transfer much of the engineering from biology. I could be wrong.

    It’s a long post and I am reaching the limits of my stamina. I wouldn’t be surprised if I have stretched yours past its limit.

    All I expect to accomplish is to go on record with a reasonable exposition of my prejudices and foibles. It’s all just speculation.

  2. petrushka,

    I’m sorry, but you lose properties when you abstract things.

    Don’t apologize. That’s an essential point!

    I’ll respond in more detail later. For now, wishing you a speedy recovery.

  3. I argued, and still argue, that he reified a metaphor and argued from the properties of the metaphor rather than the properties of chemistry. He abstracted DNA and translation into information processing and then argued about the limitations imposed by information processing.

    Exactly.

    Best wishes for a speedy recovery.

  4. Thanks. Back to the tablet, so I won’t be able to post KF-sized screeds.

    Expanding on the invalid use of metaphors, I think I may have something in common with Neil, which is that I don’t think brains do information processing. So information processing systems like Watson cannot emulate brains.

    Where I may differ from Neil is I think it is possible to emulate brains with silicon; I just don’t think there will be much reverse engineering. I think the media are so different in properties that getting past tar in one will not transfer to the other.

    I think chemistry implements a massive Markov selector, and that selector enables and drives evolution. I don’t at the moment see that happening in silicon logic, though I think we will see lots of Watsons being sold as expert systems. The first planned roll-out is medical diagnostics. That’s been a holy grail for a long time.

    Stock trading is already in place, but it has a few reliability problems.

  5. petrushka,

    It’s true that you lose properties when you abstract things. The key is to retain the essential properties while abstracting away the nonessentials.

    Which properties are essential? That depends on the system you’re modeling and on what your model is trying to accomplish. I work on a chip design team, and we use models ranging from the very concrete to the very abstract.

    Different levels of abstraction are needed because there is a tradeoff between the level of detail and the resources consumed by the model. Detailed models are resource hogs. They are harder to set up and they use more memory and compute time than the abstract models.

    We end up using detailed models for critical areas such as custom circuits, intermediate models for timing analysis, and abstract models for functional verification, where we are only concerned with the logical correctness of the design.

    How do we get away with using abstract models for functional verification? In a phrase, it’s because a gate is a gate is a gate, regardless of the medium in which it is implemented. An OR gate will assert its output if any of the inputs are asserted, and this is true whether the OR gate is implemented in silicon, germanium, relays, or tinkertoys. The logical behavior is essential to the abstract model, but the implementation details are not.

    Chips are artificial creations, of course, and part of the reason that we can model them abstractly is that we deliberately design them that way. Biological neural networks might be a different story, since they evolved rather than being designed. However, there are some fairly strong indications that this is not so, and that neurons, like gates, can be modeled abstractly with good results.

    We can discuss that in future comments if you’d like.
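
    To make the “a gate is a gate is a gate” point concrete, here is a minimal sketch in Python of gates modeled purely by their logical behavior; the function names and the little XOR circuit are invented for illustration and stand in for whatever medium actually implements the gates.

        # Abstract gate models: logical behavior only, no implementation details.
        # Whether the physical gate is silicon, germanium, relays, or tinkertoys
        # is irrelevant at this level of abstraction.

        def OR(*inputs):
            """Assert the output if any input is asserted."""
            return any(inputs)

        def AND(*inputs):
            return all(inputs)

        def NOT(x):
            return not x

        # A small composed circuit: a 2-input XOR built from the abstract gates.
        def XOR(a, b):
            return OR(AND(a, NOT(b)), AND(NOT(a), b))

        for a in (False, True):
            for b in (False, True):
                print(a, b, XOR(a, b))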

  6. My only disagreement with you is that I’m a pessimist about the level of difficulty, and I’m skeptical about reverse engineering.

    We build airplanes rather than birds. In the realm of AI we will build task specific devices that will do grunt work. Maybe there will be some synergy along the way.

    The way AI devices will engage in selection is via profitability. Unexpected things will happen. Who would have guessed that computer evolution would be driven by video games, Netflix, and telephones?

  7. petrushka,

    I’m just explaining why I don’t share your pessimism:

    The specific reason I am pessimistic about AI is that I think we do not know how to emulate chemistry.

    We don’t need to emulate chemistry if abstract models of neurons will suffice, and there’s good evidence that they will.

    Even evolutionary models generally don’t need to simulate chemistry. For example, we can simulate the time to fixation for an allele without knowing the details of the underlying chemistry.

    As I said above, it’s a matter of retaining the essential properties while abstracting away the nonessentials. The chemistry will be essential for some models but not for others.
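
    As an illustration of the fixation example, here is a minimal Wright-Fisher-style sketch in Python that estimates the fixation probability and time to fixation of a neutral allele using only population-genetic abstractions; the population size and starting count are arbitrary choices, and no chemistry appears anywhere in the model.

        import random

        # Wright-Fisher sketch: a neutral allele in a population of N haploid
        # individuals.  Each generation the new allele count is drawn from the
        # old allele frequency; nothing below refers to any underlying chemistry.

        def run_until_fixed_or_lost(N=100, start_count=10):
            count = start_count
            generations = 0
            while 0 < count < N:
                p = count / N
                # Binomial sampling via N Bernoulli trials (fine for small N).
                count = sum(1 for _ in range(N) if random.random() < p)
                generations += 1
            return generations, count == N

        # For a neutral allele the expected fixation probability is start_count/N.
        runs = [run_until_fixed_or_lost() for _ in range(500)]
        fixed_times = [g for g, fixed in runs if fixed]
        print("fixation probability ~", len(fixed_times) / len(runs))
        if fixed_times:
            print("mean generations to fixation ~", sum(fixed_times) / len(fixed_times))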

  8. Whoop de do, that we can simulate time to fixation. Yes, it’s an important confirmation of theory, but it isn’t artificial life.

    And AI will do for mental drudgery what Caterpillar did for backbreaking labor.

    But if our aim is an artificial brain that will be self conscious, feel pain, experience qualia — it will have to evolve, regardless of the physical substrate. The environment will have to present reinforcement for self awareness, and the substrate will have to support evolution.

    Remember all those thousands of dimensions of selection we hit IDiots with? Same thing.

  9. petrushka,

    Whoop de do, that we can simulate time to fixation. Yes, it’s an important confirmation of theory, but it isn’t artificial life.

    It shows that we don’t necessarily have to emulate chemistry. Some models require it, while others don’t. Do you have evidence that AI requires it?

    But if our aim is an artificial brain that will be self conscious, feel pain, experience qualia …

    You’re shifting the goalposts. We’ve been talking about artificial intelligence, not artificial consciousness.

    …it will have to evolve, regardless of the physical substrate. The environment will have to present reinforcement for self awareness, and the substrate will have to support evolution.

    I don’t see why the substrate has to support evolution. The Big Dog robot can’t reproduce and evolve, but it was inspired by evolved quadruped locomotion. Why can’t AI follow a similar path?

  10. Impressive piece of robotics. “Can’t evolve” you say? I’ll bet a lot of trial-and-error went into producing the current model!

  11. That’s “intelligently guided evolution”. 🙂

    The best part is when that guy tries to kick it over. You almost feel sorry for it.

  12. Perhaps my categories don’t align with yours. I have, from the start of this thread, considered AI to mean strong AI. By which I mean self aware and experiencing qualia. I thought I made it clear that I believe we will make steady progress in commercial (and military) applications of data processing and expert systems. Not to mention robotics, including autonomous robotics. I will be interested to see if a “pure” neural network can learn something like flight control. If this has already happened, consider me impressed.

    By evolution I would accept “self” modification via trial and error learning without necessarily involving physical replication.

  13. petrushka,

    Perhaps my categories don’t align with yours. I have, from the start of this thread, considered AI to mean strong AI. By which I mean self aware and experiencing qualia.

    Sure, we’ve been discussing strong AI, but strong AI doesn’t require the presence of qualia. The standard benchmark for strong AI is the Turing Test, and a system can pass the Turing Test with or without qualia.

    How would you even test for the presence of qualia?

    I will be interested to see if a “pure” neural network can learn something like flight control. If this has already happened, consider me impressed.

    This is even funkier:

    In this paper, we report the results of an experiment using a living neuronal network as a matrix of weights that we can measure and manipulate in a real-time feedback control system to stabilize the flight of a simulated aircraft. The system, illustrated in Figure 1, consists of rat cortical neurons cultured on an MEA that are stimulated periodically to measure the weights from two different locations (stimulation sites) in the network. Proportional feedback as the result of errors in the aircraft’s attitude (pitch and roll) is computed using the current synaptic weights measured between neurons within the rat cortical network. These weights were modified during each evolution based on the flight trajectory information, measurement of weights, and proportional feedback, to optimize the aircraft’s stability. In other words, this living neuronal network essentially “learned” to act as an autopilot adjusting the aircrafts control surfaces to maintain straight and level flight.

    petrushka:

    By evolution I would accept “self” modification via trial and error learning without necessarily involving physical replication.

    But that’s learning. “Evolution”, especially on this site, means something else.

    Anyway, our main point of disagreement is over this statement of yours:

    The specific reason I am pessimistic about AI is that I think we do not know how to emulate chemistry.

    I still don’t see why you think it’s necessary to emulate chemistry.

  14. In the ‘stolen concept’ thread, Neil writes:

    I’ll add an off-topic comment. My disagreement with AI (see the AI thread), boils down to the problem that AI implements logic but fails to implement judgment.

    That seems to be quite a departure from what you’ve previously argued in this thread.

    In any case, you are still confusing the characteristics of the system with the characteristics of its components. From an earlier comment of mine:

    AI does not equate intelligence with logic. Logic gates implement logic functions, of course, but that does not mean that systems based on logic gates can only do logic.

    You made the same mistake in the OP:

    Interestingly Patrick’s message, where he asked for this thread, contained a picture of Spock from Star Trek. As a Star Trek character, Spock was known to be very logical and not at all emotional. That’s what I think you get with computation. However, as I see it, something like emotions are actually needed. They are what would give an artificial person some sense of direction.

    eigenstate explained your error:

    On the emotion thing, there’s a problem of (inadvertent) equivocation on the word “logic” that comes up regularly in these discussions. Your Spock reference highlights the equivocation. In a colloquial sense, Spock is “logical” because he’s minimally emotional (although that’s a misconception about emotions as well, but I won’t bother with that here). But the most emotional “illogical” person you might compare Spock to is exactly as logical as Spock or anyone else in a computational sense of the term, and the computational sense of “logic” is what we are focusing on here, right.

    That is, emotion is computation in as thoroughgoing sense as mathematical figuring in one’s head. Emotional responses are driven by stimuli and interaction with the brain, nervous system and other parts of the body in rule-based, physical ways.

    As for emotion, so for judgment. Individual logic gates don’t exercise judgment, but that doesn’t mean that systems based on logic gates cannot exercise judgment.

  15. keiths: That seems to be quite a departure from what you’ve previously argued in this thread.

    It may seem that way to you, but not to me. I see judgment as depending on categorization, which I have previously mentioned.

    Individual logic gates don’t exercise judgment, but that doesn’t mean that systems based on logic gates cannot exercise judgment.

    I see logic as decision making based on form, and I see judgment as decision making based on content. When a system based on logic gates “exercises judgment”, it is usually because a human designer has built a formal structure to adequately represent the content. So that system built on logic gates is not really making a decision based on content. It is merely simulating that, with the aid of its human-provided programming.

    This is really an earlier disagreement, where I took the sensing of environmental information to be central, while you insisted that the formal logic was central. I see the brain as a measurement and categorization engine, not as a logic engine.
