Is AI really intelligent?

I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.

You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).

I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.

57 thoughts on “Is AI really intelligent?”

  1. Computers don’t do real math. They only simulate doing math. All the answers can be found in books.

    Therefore they aren’t useful.

  2. petrushka:

    Computers don’t do real math. They only simulate doing math. All the answers can be found in books.

    Therefore they aren’t useful.

    Exactly. His argument really seems to be something like that. When pressed, he would probably admit that computers are useful, but then he’s faced with the obvious question: Why are computers so useful if the arithmetic they do is fake? And what distinguishes fake arithmetic from the genuine arithmetic that humans presumably do?

  3. Current AIs have exposed a fatal flaw in the Turing Test.

    They can converse so eloquently on every subject that their competence betrays them. And then they flub a response that a five-year-old could handle.

    The things AI can’t do are the things that cats, dogs, monkeys and dolphins can do.

    If I were forced to make a prediction, I would predict that AI will acquire more humanlike capabilities over time. How much time? Less than anyone imagined. Science fiction said 300 years. I think less than thirty.

    Five years ago I thought self-driving was extraordinarily difficult. That was true, but it arrived anyway.

  4. I asked Claude to invent a language with grammar different from English, and then to translate his Tic-tac-toe paragraph into it, stating the vocabulary and rules he was using. This was a much harder challenge for him, and he screwed up in a few places: omitting a pluralizing suffix where one was needed, messing up the word order in a sentence, forgetting that adjectives always come after nouns in his language. I didn’t describe his errors to him — I just asked him to double-check his work, and he found all but one of the mistakes. (A similar approach is helpful when using AI to code. Always ask it to double-check; a rough sketch of that workflow is at the end of this comment.)

    ETA: I tried another couple of times in separate chats, and Claude did even worse.

    It’s an interesting set of failures. I think he was having trouble ignoring the “urge” to use English-like constructions and sticking to his invented rules. Even though the words were invented, and he had never seen them before, he had a clear sense of their function: noun, verb, adjective — and the English rules for dealing with those sometimes overpowered the rules he had invented.

    I won’t bore you with the translation, but to give you a flavor of the invented grammar, I asked Claude to do a literal, word-for-word translation back into English:

    Players two game play grid-on 3×3-in. Player first mark X uses, player second mark O uses. Players turns alternate, mark placing squares-in-empty. Player marks-their-three line-in—horizontal, vertical, or diagonal—achieving wins. Squares all marks without filled being, game draw-in ends.

    This has implications for the experiment I’m designing in which I’ll try to teach the AIs to code in a language that isn’t in their training data. Should be interesting.
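
    In the meantime, to make the “always ask it to double-check” tip concrete, here’s a minimal sketch of the loop I have in mind when using an AI to write code. The ask() callable is a hypothetical stand-in for whichever chat API you actually use (Anthropic’s, OpenAI’s, a local model); none of the names below come from a real SDK, and the review prompt is just a placeholder.

    # Minimal sketch of a "generate, then ask for a double-check" loop.
    # ask() is a hypothetical stand-in for a real chat-completion call;
    # it takes the running conversation and returns the model's reply.
    from typing import Callable

    Message = dict[str, str]  # e.g. {"role": "user", "content": "..."}

    def generate_with_self_check(ask: Callable[[list[Message]], str],
                                 task: str,
                                 review_rounds: int = 1) -> str:
        """Ask the model for code, then ask it to re-check its own answer."""
        history: list[Message] = [{"role": "user", "content": task}]
        draft = ask(history)
        history.append({"role": "assistant", "content": draft})

        for _ in range(review_rounds):
            # Don't point out specific errors -- just ask for a re-check,
            # the same way I asked Claude to double-check his translation.
            history.append({
                "role": "user",
                "content": "Please double-check your previous answer for "
                           "mistakes and post a corrected version if you "
                           "find any.",
            })
            draft = ask(history)
            history.append({"role": "assistant", "content": draft})

        return draft

    In the first chat, a single re-check was enough for Claude to find all but one of his translation mistakes, though as the ETA above shows, your mileage may vary.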

  5. Erik:

    For the last time, AI *simulates* thinking. That’s what it’s made to do.

    I still haven’t heard you explain the difference between all these supposedly fake activities and their real counterparts, other than simply declaring that when a machine does them, they’re fake — they’re only simulated, not real. If your premise is “Machines can’t be intelligent, because at most they can only simulate intelligence”, then your conclusion will be that machines can’t be intelligent. That’s boring, because no matter how capable they become, even doing genius-level work, you’ll declare that they aren’t intelligent. Like everyone, you’re entitled to your own definitions, but that isn’t how most people use the word “intelligent”. For them, there are activities that machines could do (or already do) that would qualify them as intelligent.

    But back to your contention that all of these activities are only fake when a machine does them. If you ask an AI to write a story, it produces a story. It’s a real story, with characters, a plot, and a resolution. If you show it to someone without telling them that an AI produced it, they’ll describe it as a story. It’s a real story, but according to you, the AI is only simulating the process of writing. If so, why does a real story get produced? How can a fake story-writing process produce real stories? If the process is fake, why aren’t the stories fake?

    If computers produce real sums, why regard their arithmetic as fake? If AIs systematically work through and solve physics problems, why is that fake problem-solving? If it’s fake, why do real solutions come out the other end? If an AI produces a paragraph describing the rules of Tic-tac-toe without using the word “the”, it’s a real paragraph, not a fake one. If the process is fake, why did it produce a real paragraph that honors the specified constraint?

    Please stop avoiding the simulated-vs-real question. It’s central to the debate.

    Erik:

    The essence of its actual operations is regurgitation of pre-fed material.

    No, and I’ve given multiple examples of AIs generating original material.

    Erik:

    Based on available research, AI starts glitching whenever it faces a problem for which it has little or poor training, no matter how simple the problem.

    Your plumber (unless they happen to be a computer geek) will start glitching if you ask them to write code in C++. They haven’t been trained in it. They’ll have trouble getting it to compile, much less produce the desired results. So what? What does that have to do with the question of their intelligence?

    Erik:

    You can make AI say things like, “I love you.”

    keiths:

    You can make dolls say that too. Not proof of actual emotions.

    Erik:

    Exactly. And also no proof of intelligence.

    Not so. Your plumber can fake being happy, but they can’t fake knowing how to program in C++ — not if you actually sit them down and ask them to do it.

    keiths:

    Emotions require sentience, but intelligence doesn’t. To be happy is to feel happy, but you don’t have to feel anything in order to be intelligent. You just need to be competent.

    Erik:

    False. Both emotions and intelligence require self-cognition. Machines do not have it.

    Supporting argument, please. Why is self-cognition necessary? If an AI unifies GR and QM, but doesn’t have self-awareness, why does that make it unintelligent? For that matter, emotions don’t depend on self-awareness either. Do you truly think an animal can’t be content, or angry, or amorous without thinking of itself as content, angry, or amorous?

    Plus, I gave two examples of Claude’s self-awareness above, in which he demonstrates meta-knowledge — knowledge about what he knows and doesn’t know.

    Erik:

    AI does not think when nobody makes it do so. Like all machines, when given nothing to do, AI does nothing.

    Chatbots are deliberately designed not to go off and do things on their own, but that doesn’t mean they aren’t capable of it. Once a chatbot starts talking, it can go on indefinitely. The only reason it stops is that it’s trained to stop instead of blabbing on and on like Aunt Mildred.

    Earlier in the thread, I mentioned that people are putting AI characters into video games and letting them loose to explore and learn. Those AIs are self-motivated. They aren’t constantly waiting for someone to tell them what to do, unlike chatbots. You’re mistaking a deliberate design decision for an inherent limitation of AI.

    Erik:

    But humans think and act even when alone. It’s a key difference between dead stuff and living beings.

    See above. Nothing prevents AIs from thinking and acting alone, other than deliberate design decisions.

    Erik:

    I’m pretty sure that you do not acknowledge the concept of self-cognition at all – a general issue with physicalists.

    I don’t know where you got that idea. Who are these physicalists who think that self-cognition doesn’t exist? If you ask one of them “How are you feeling today?”, do they answer “I don’t know. Who is this ‘you’ of whom you speak?” I know a lot of physicalists, and I’ve never heard them say anything like that. And even that would demonstrate self-cognition, because the speaker is referring to their own state of knowledge: “I don’t know.”

    keiths:

    You’re assuming your conclusion:
    1. Machines can’t be intelligent.
    2. AIs are machines (or based on machines).
    3. Therefore AIs can’t be intelligent.

    Erik:

    It’s more like:

    1. All machines lack intellect/intelligence.
    2. AIs are machines.
    3. Therefore AIs lack intellect/intelligence.

    Same difference. You’re assuming your conclusion.

  6. A while back, I read about an experiment in which researchers tried to teach chimps to drive cars. It turned out the chimps were very skilled drivers, but they failed to grasp certain essentials. The chimps understood, for example, that green means go and red means stop – but if you put a brick wall in front of the car and turned the light green, the chimps always drove the car into the wall.

    Self-driving algorithms suffer from a similar lack of judgment. Human drivers (except teenagers) learn to assess their overall situation – all other nearby vehicles, which direction they’re heading and how fast; side streets and parking lots, and whether anyone might pull out of one; pedestrians and what they’re doing (and their approximate ages, along with the judgment expected from pedestrians of those ages); whether signs like stop signs have likely been improperly removed; some sense of possible mechanical failure, like a sticky gas pedal or a flat tire. The list can become very long, yet we all learn to develop awareness of all of it. Self-driving algorithms struggle with both the awareness and the judgment applied to it. Perhaps most multi-vehicle accidents stem from someone doing something that someone else failed to properly (or accurately) anticipate.

    I don’t think I’d be very good at writing a driving AI that could actually recognize that a nearby vehicle was being driven by someone age 16 or 90, and plan accordingly. As of today, I think we’re a long way from programming “driving intelligence”. Many edge cases are handled well enough, but only those cases that can be anticipated and modeled. Anticipating the unexpected, now, that’ll take a while longer.

  7. Flint,

    Your comment reminded me of something I read the other day about a proposal to add white to the standard red, yellow, green of traffic lights. I’m not clear on the fine details of how this would work, but the basic concept is this: white means “follow the car in front of you”. The idea is that all of the self-driving cars will communicate with each other in order to coordinate the best way to maintain traffic flow while preserving safety. Humans who are still driving their old clunkers will treat red, yellow and green as always, but if they see white, I guess that means that the car in front of them is self-driving and is effectively running interference for them, so that all they have to do is follow it.

    That’s all I know about it, from the single mini-article I read, but a rough sketch of my reading of the rule follows below.
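
    To be clear about what is mine and what is the proposal’s: the enum, the function, and the action strings below are my own illustration of the “white means follow the car ahead” idea, not anything taken from the article.

    # Toy sketch of the proposed fourth light phase, as I understand it:
    # red, yellow and green keep their usual meanings for human drivers,
    # while white means "follow the car in front of you", on the assumption
    # that the lead car is self-driving and coordinating with the others.
    from enum import Enum

    class Light(Enum):
        RED = "red"
        YELLOW = "yellow"
        GREEN = "green"
        WHITE = "white"  # the proposed fourth phase

    def human_driver_action(light: Light) -> str:
        """What a human driver would be expected to do at each phase."""
        if light is Light.RED:
            return "stop"
        if light is Light.YELLOW:
            return "prepare to stop"
        if light is Light.GREEN:
            return "proceed when clear"
        # WHITE: the car ahead is presumed to be self-driving and
        # coordinating with the other automated vehicles, so just follow it.
        return "follow the car in front of you"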
