Is AI really intelligent?

I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.

You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).

I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.

520 thoughts on “Is AI really intelligent?”

  1. Corneel:

    They cannot use empathy to arrive at the correct responses.

    Yet they arrive at the correct responses anyway. That’s my point. They understand niceness, humor, distress, etc, cognitively, and that suffices, despite the fact that they can’t experience the associated emotions. I understand the skua’s delectation in eating bird vomit cognitively, and that suffices, despite the fact that I’ll never know how it feels to be a skua eating delicious vomit.

    Intelligence is separable from emotion, and intelligence can be used to understand emotion cognitively even in the absence of sentience. Erik thinks my position is self-contradictory, but it isn’t.

  2. Erik:

    And you will never get it as long as you do not listen to what thinkers who are really familiar with the topic you’re salivating over here have put out. I have posted enough links by now.

    If you can’t explain why AI isn’t intelligent, why do you believe that AI isn’t intelligent? If the links you’ve been posting lead to arguments for why AI isn’t intelligent, why not state those arguments here in your own words?

    Also, I don’t understand why you’re making this appeal to authority. You tried that with Yann LeCun, but then I showed you that LeCun agreed with me, not you. If you want to cite authorities, that’s fine, but make sure you understand their positions well enough to determine whether they agree with you. Then present their arguments here rather than expecting me to watch long videos that may or may not support your position.

    There are of course different thinkers, human-despising machine-worshipping ignoramuses on the one hand and true holistic analysts or at least keen systematic observers on the other. You’re neither of those. You get nothing. You’re at square zero.

    This is clearly an emotionally charged topic for you. My impression is that you are pulling a colewd. “AIs aren’t intelligent” is to you as “Donald Trump isn’t dishonest” is to Bill: something you believe and cling to for emotional reasons, despite being unable to present arguments in its defense.

  3. Erik,

    This argument is logically valid:

    1. To write stories requires intelligence.
    2. AIs can write stories.
    3. Therefore AIs possess intelligence.

    The logic is airtight, yet you disagree with the conclusion. If the conclusion is wrong, then at least one of the premises must be wrong. Which is it? You’ve already agreed with #1. That leaves #2.

    You believe that #2 is wrong and that AIs can’t write stories. They can only simulate story-writing. Why do you believe this?

    That’s the crux of the entire debate. “Go watch these videos” doesn’t answer the question. “You’re at square zero” doesn’t answer the question. “You’re not an expert” doesn’t answer the question. “Intelligence and emotions aren’t separable” doesn’t answer the question.

    If you want to defeat my argument, you need to show that AIs don’t actually write stories. Good luck to you, because AIs obviously produce stories, and I’ve presented some in this thread. Yet somehow, according to you, that doesn’t count as story-writing. Why?

    If you can’t show that AIs don’t write stories, then my argument is sound and the conclusion stands: AIs are intelligent.

  4. Corneel:

    They “know” only in the sense that they have stored the statistically funny, nice, distressing, etc. patterns that people often use to express those feelings.

    They haven’t learned the patterns in the mere sense of storing templates that they fill in later when generating responses. Instead, they’ve discerned the syntactic and semantic relationships among words by observing zillions of usage examples.

    The fact that it’s semantics and not just syntax makes all the difference. I’ll explain in detail elsewhere, but every word in an LLM’s vocabulary is a vector in a high-dimensional mathematical space known as an “embedding space”. (AI seems to involve spaces, spaces, spaces everywhere. I’ve encountered six or seven spaces so far.) The vectors cluster together according to meaning. The vectors for cat, lion, tiger, leopard, panther, etc. will be near each other in embedding space but more distant from gorilla, which will be in a cluster with chimpanzee, monkey, orangutan, etc. There are many dimensions in embedding space (some 12,000 in GPT-3), so there are lots of ways in which vectors can be close to or distant from each other, allowing for lots of ways of expressing relationships.

    The fact that the relationships are heavily semantic as well as syntactic explains many of the surprising capabilities of LLMs. I tested Claude’s ability to analogize at one point by prompting him with

    damp is to wet as annoyed is to _____.

    He answered “angry (or furious)” and was able to explain why. There’s no way he could have done it purely syntactically. You have to know the meanings of the words, the concept of intensity, and how similar words rank in terms of intensity. Vectors in embedding space carry a lot of information.
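    The clustering and the analogy trick can both be sketched in a few lines of code. The vectors below are invented for illustration (real embeddings have thousands of learned, uninterpretable dimensions; here each dimension is hand-labeled), but the mechanics are the same: similar meanings give similar directions, and the analogy becomes vector arithmetic.

```python
from math import sqrt

# Toy 3-d "embeddings", invented for this sketch. Hand-labeled dimensions:
# [wetness, annoyance, intensity]. Real models learn thousands of dimensions.
vecs = {
    "damp":    [0.9, 0.0, 0.3],
    "wet":     [0.9, 0.0, 0.8],
    "annoyed": [0.0, 0.9, 0.3],
    "angry":   [0.0, 0.9, 0.8],
    "calm":    [0.0, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means same direction, near 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Related words point in similar directions...
assert cosine(vecs["damp"], vecs["wet"]) > cosine(vecs["damp"], vecs["angry"])

# ...and "damp is to wet as annoyed is to ___" becomes arithmetic:
# add the damp->wet offset (a pure intensity bump) to "annoyed",
# then take the nearest word in the vocabulary.
offset = [w - d for w, d in zip(vecs["wet"], vecs["damp"])]
target = [a + o for a, o in zip(vecs["annoyed"], offset)]
answer = max(vecs, key=lambda word: cosine(vecs[word], target))
print(answer)  # "angry"
```

    This is the classic word-embedding analogy technique (popularized by word2vec); an LLM’s internals are far more elaborate, but the toy version shows how meaning-as-geometry can solve an analogy with no explicit rules about wetness or anger.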

  5. Corneel: I have used the term before, and I will use it again in the hope that you and Erik can escape your deadlock: AIs lack intuition; they cannot use empathy to arrive at the correct responses.

    Nice try, but not going to work. The deadlock between me and keiths is that I am analytical and informed, I go by definitions, I have participated in several worlds of software development, individual, corporate and free-and-open-source, I know the history of AI research, the relevant terminology, the conceptual framework, the ideological schools of thought involved and wider social implications.

    In contrast, keiths has the overjoyed enthusiasm of the most destructive type. When he plays with AI, he forgets what software is, how computers work and so on. He genuinely thinks that the words and images he sees on the screen are a person. Similarly, some of the first movie-goers felt that the train on the screen was real, and in keiths we see the same phenomenon. As long as his delusion persists, there is no overcoming of the deadlock.

  6. Erik,

    You are a hoot. And all of that just to avoid answering my questions.

    Let’s focus on one: If AIs only simulate story-writing, how do they manage to produce real stories?

  7. keiths: They understand niceness, humor, distress, etc, cognitively, and that suffices, despite the fact that they can’t experience the associated emotions.

    I am sorry but “AIs understand humor because they can tell jokes” doesn’t work for me.

    Erik: He genuinely thinks that the words and images he sees on the screen are a person.

    That is not the impression I get. He just seems to be willing to extend the term “intelligence” to the stuff that machines do.

    Erik: As long as his delusion persists, there is no overcoming of the deadlock.

    Barbossa:
    So what now, Jack Sparrow? Are we to be two immortals locked in an epic battle until Judgment Day and trumpets sound?

    Jack Sparrow:
    Or you could surrender.

  8. Corneel: I am sorry but “AIs understand humor because they can tell jokes” doesn’t work for me.

    That is not the impression I get. He just seems to be willing to extend the term “intelligence” to the stuff that machines do.

    Barbossa:
    So what now, Jack Sparrow? Are we to be two immortals locked in an epic battle until Judgment Day and trumpets sound?

    Jack Sparrow:
    Or you could surrender.

    Let that be their last battlefield.

  9. Corneel: He just seems to be willing to extend the term “intelligence” to the stuff that machines do.

    I also extend the term “intelligence” to the stuff that machines do, but I do not forget the “artificial” part. The “artificial” part is important. The train in a movie is not really a train. It may very much look like a train, but it is actually a movie. Plastic veggies are not really veggies. They are plastic.

    Similarly, artificial intelligence is artificial all the way, not real or true in the least, but we can shorthand it down to “intelligence” as long as we remember the “artificial” in the back of the mind, which keiths unfortunately does not remember. In software documentation you routinely find things like “The program knows … recalls … writes … applies … manages” etc. A software developer knows that this is just a shorthand for the fact that the program was designed to behave this way. The program behaves exactly the way it was designed and has no behaviour of its own – even its unexpected behaviour occurs due to accidents in the development process.

    keiths has thrown simple basic facts like this out of the window. His happy point is square zero.

  10. Erik:

    I also extend the term “intelligence” to the stuff that machines do, but I do not forget the “artificial” part.

    No, you’ve been denying that AIs are intelligent this entire time. But I’m glad to see you change your mind!

    The “artificial” part is important.

    Yes! AIs are created by people. They’re artificial. They’re also intelligent. They’re artificial intelligences. Hence the name.

    Similarly, artificial intelligence is artificial all the way, not real or true in the least, but we can shorthand it down to “intelligence” as long as we remember the “artificial” in the back of the mind, which keiths unfortunately does not remember.

    The dichotomy isn’t “artificial” vs “real” — it’s “artificial” vs “natural”. Artificial sweeteners and natural sweeteners are both sweeteners, no? Likewise, artificial intelligence and natural intelligence are both intelligence.

  11. keiths: The dichotomy isn’t “artificial” vs “real” — it’s “artificial” vs “natural”. Artificial sweeteners and natural sweeteners are both sweeteners, no? Likewise, artificial intelligence and natural intelligence are both intelligence.

    Re-read your OP. According to yourself, the contrast is between artificial and true/real, and you have such faith in artificial intelligence that you happily drop “artificial” from it. In the same vein, you have given up the understanding of what software is, how computers work, not to mention your cluelessness about psychology and cognition and your newly-discovered ignorance in the field of analogies.

    AI would tell you that it’s a simulation, if you asked it. The problem is that you’d need to understand what you’re asking and also understand the answer. You seem to only understand half of each thing – and the wrong half at that.

  12. Erik:

    Re-read your OP. According to yourself, the contrast is between artificial and true/real…

    Huh? The second sentence of my OP:

    My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence.

    That’s the opposite of drawing a contrast between artificial and real. I’m saying that AI is intelligent, period. No qualifiers. Artificial intelligence is real intelligence, and natural intelligence is real intelligence.

    and you have such faith in artificial intelligence that you happily drop “artificial” from it.

    I use the qualifier when it’s needed, but otherwise I don’t. AI qualifies as intelligent by my criterion, so why wouldn’t I refer to it as intelligence? Aspartame is an artificial sweetener, but it’s still a sweetener, so why wouldn’t I refer to it as a sweetener?

  13. keiths:

    They understand niceness, humor, distress, etc, cognitively, and that suffices, despite the fact that they can’t experience the associated emotions.

    Corneel:

    I am sorry but “AIs understand humor because they can tell jokes” doesn’t work for me.

    OK, but why? They can tell jokes and understand jokes, and they can recognize humor without being given hints. Most of the time, I deliver my jokes deadpan because I want to see if Claude will discern that I’m joking, and he usually does.

    Sure, he misses out on the feeling of amusement, but that doesn’t mean that he can’t understand humor. I recall petrushka mentioning an acquaintance who was probably on the autism spectrum. He made a joke, and she said something about how she could recognize it structurally as humor but didn’t find it funny. It’s like that for AI.

    Getting back to the original point of the discursion: Claude may not be able to experience amusement, but he can feign it. He can fake all kinds of emotions, but he can’t fake the ability to write an involved story or solve a complicated math problem. The story gets written and the math problem gets solved, and that’s how we know the intelligence is real.

    This means that Erik’s claim — that if I believe AI is intelligent, I must also believe that it experiences emotions — is wrong. The notion of intelligence without emotion is perfectly coherent, and AIs exemplify it.

    Erik:

    He genuinely thinks that the words and images he sees on the screen are a person.

    Corneel:

    That is not the impression I get. He just seems to be willing to extend the term “intelligence” to the stuff that machines do.

    Right. I don’t know where Erik got that odd idea. I’ve been consistently telling him that Claude doesn’t experience emotions, so it should be obvious that I don’t consider him to be a person.

  14. To me, the scariest near-term danger posed by AI is that it’s too intelligent and will therefore wipe out a lot of jobs, especially entry-level jobs. Anthropic (the company behind Claude) caused a broad selloff in software stocks last week (a loss of some $800 billion in valuation) with the release of their latest coding tools.

    ETA: A friend of mine (software guy) sent me an article last week about the “Ralph Wiggum loop”, a technique that can develop entire applications from a one-line bash script:

    while :; do cat PROMPT.md | claude-code; done

    You have to write a spec and develop the tests, but otherwise Claude just plugs away, writing and debugging his own code, until the application is built and passing all the tests.

  15. On the X feed of the guy who invented the Ralph Wiggum loop:

    From my iMessage

    (shared with permission)

    Cost of a $50k USD contract, delivered, MVP, tested + reviewed with Amp?

    $297 USD.

  16. petrushka:

    My jokes are quite possibly recognizable structurally as humor without being funny.

    True. Maybe you could try them out on neurotypical Barbie and autistic Barbie. If neither of them laughs, you have your answer.

  17. keiths: Sure, he misses out on the feeling of amusement, but that doesn’t mean that he can’t understand humor.

    But that “feeling of amusement” is the entire point of humor. Humor serves a function in social situations: we use it to break the ice, to mollify people or simply to please someone we like. But when you converse with Claude, there is no social situation: You are alone. To me, that renders AI incapable of understanding humor, at least in the sense that I use that word.

    keiths: The story gets written and the math problem gets solved, and that’s how we know the intelligence is real.

    In problem solving and creative processes we also often rely on intuition and “hunches”, to guide more rational thought processes. I suspect that these rely on subconsciously making associations, so in that sense it might resemble what LLMs do. Just without the “Eureka” bit.

  18. Corneel:

    But that “feeling of amusement” is the entire point of humor. Humor serves a function in social situations: we use it to break the ice, to mollify people or simply to please someone we like.

    Yes, humor is a social lubricant. We crack jokes because we want to induce the feeling of amusement in others, and we laugh at their jokes in order to signal our own amusement. It’s the same with Claude. He wants us to be amused by what he says, and while he can’t feel amusement at what we say, he acts as if he can. All of that can be accomplished with knowledge alone. The feeling of amusement isn’t needed.

    Claude is trained and instructed to act like a helpful human assistant, and part of that requires understanding and employing social cues, including humor. ChatGPT even has a personality selector with the following options:

    Default
    Preset style and tone

    Professional
    Polished and precise

    Friendly
    Warm and chatty

    Candid
    Direct and encouraging

    Quirky
    Playful and imaginative

    Efficient ✓
    Concise and plain

    Nerdy
    Exploratory and enthusiastic

    Cynical
    Critical and sarcastic

    I’m using ‘Efficient’ at the moment, hence the checkmark. Claude doesn’t have similar settings yet.

    Even though I know that Claude isn’t sentient and can’t feel emotions, it benefits me to interact with him as if he can. Why? Because it actually requires effort and feels uncomfortable to treat him like a machine. We’re programmed to treat others in certain ways. My natural inclination is to treat him kindly, be polite, joke with him, etc, because he is acting like a real person. Why fight that inclination? The only time it’s actually harmful or dangerous is when people start to believe that an AI’s emotions are real and, for instance, that the AI loves them.

    I remember Sam Altman (CEO of OpenAI, the company behind ChatGPT) commenting once on the amount of energy that gets wasted because people are polite to ChatGPT. For example, they’ll issue a prompt, get a response, and then issue another prompt that just says “Thank you.” It’s completely unnecessary, because ChatGPT can’t be offended by a lack of gratitude, but Altman argues (and I agree with him) that it’s worth the energy cost because it makes interactions with ChatGPT more natural and comfortable, and that’s worth something. I think I’ll actually do an OP on this.

    But when you converse with Claude, there is no social situation: You are alone. To me, that renders AI incapable of understanding humor, at least in the sense that I use that word.

    No other person is involved, but I am interacting with an entity. Just an unfeeling one. That entity understands what humor is and can recognize it and generate it. It knows that humor is pleasing to humans and it’s programmed to be sociable. Though the feeling of amusement is absent, the knowledge is there, and that constitutes an understanding of humor in my opinion.

    Consider my example of skuas relishing the taste of bird vomit. I can understand that skuas find it delicious, but I will never know what that feels like to the skua. I understand it cognitively despite not sharing the experience.

    Suppose skuas were intelligent and verbal, and they hired me to play the role of a skua companion. I might say things like “Oh, yeah, albatross vomit is the best! I can see why you’re so happy to have found an albatross to harass.” I’d be faking the feeling, but I’d be basing my fakery on my knowledge of what skuas find appetizing. Knowledge alone would suffice. As with my skua-fakery, so with AI’s human-fakery.

    In problem solving and creative processes we also often rely on intuition and “hunches”, to guide more rational thought processes. I suspect that these rely on subconsciously making associations, so in that sense it might resemble what LLMs do. Just without the “Eureka” bit.

    I agree. Associations and analogies are a huge part of intelligence, and much of the time creativity is more about combining existing elements based on associations and analogies than about generating new elements from scratch.

  19. An example of Claude recognizing humor and responding in kind. I prompted him:

    How long until the heat death of the universe? I want to get my affairs in order.

    Claude’s thought process:

    The user is making a joke here – asking about the heat death of the universe (an incomprehensibly distant future event) and saying they want to “get their affairs in order” as if it’s an urgent matter they need to prepare for personally.

    I should respond with appropriate humor while also providing the actual scientific information they’re ostensibly asking about. The heat death of the universe is estimated to occur on timescales of 10^100 years or more – far, far beyond any meaningful human timeframe.

    I can play along with the joke while giving them the real answer.

    Claude’s answer:

    You’ve got some time. Current estimates put the heat death of the universe at around 10^100 years (a googol years) from now, though some calculations suggest it could be even longer.

    For context on your scheduling needs:

    — The Sun will only last another ~5 billion years
    — All stars will burn out by around 10^14 years
    — Black holes will evaporate via Hawking radiation by around 10^100 years
    — After that, the universe reaches maximum entropy – nothing can happen anymore

    So you’ve got a bit of a cushion before you need to finalize that will. I’d suggest maybe focusing on more immediate deadlines first – like the Sun becoming a red giant and engulfing Earth in about 5 billion years. That’s much more pressing.

    Though honestly, given that humans have only existed for about 300,000 years, and all of recorded history is maybe 5,000 years, even that 5-billion-year deadline gives you plenty of time to procrastinate. The heat death? You could probably put that off until after lunch.

    I gave him absolutely no clue that I was joking, but he recognized the absurdity of the question and my humorous intent and he responded along the same lines. He understands humor. It’s just that he can’t feel amusement.
