I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
Corneel:
Yet they arrive at the correct responses anyway. That’s my point. They understand niceness, humor, distress, etc., cognitively, and that suffices, despite the fact that they can’t experience the associated emotions. I understand the skua’s delectation in eating bird vomit cognitively, and that suffices, despite the fact that I’ll never know how it feels to be a skua eating delicious vomit.
Intelligence is separable from emotion, and intelligence can be used to understand emotion cognitively even in the absence of sentience. Erik thinks my position is self-contradictory, but it isn’t.
Erik:
If you can’t explain why AI isn’t intelligent, why do you believe that AI isn’t intelligent? If the links you’ve been posting lead to arguments for why AI isn’t intelligent, why not state those arguments here in your own words?
Also, I don’t understand why you’re making this appeal to authority. You tried that with Yann LeCun, but then I showed you that LeCun agreed with me, not you. If you want to cite authorities, that’s fine, but make sure you understand their positions well enough to determine whether they agree with you. Then present their arguments here rather than expecting me to watch long videos that may or may not support your position.
This is clearly an emotionally charged topic for you. My impression is that you are pulling a colewd. “AIs aren’t intelligent” is to you as “Donald Trump isn’t dishonest” is to Bill: something you believe and cling to for emotional reasons, despite being unable to present arguments in its defense.
Erik,
This argument is logically valid:

1. Writing stories requires intelligence.
2. AIs write stories.

Therefore, AIs are intelligent.
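To make the form explicit, here’s a minimal machine-checkable sketch in Lean. The formalization and the predicate names (WritesStories, Intelligent) are mine, introduced only to display the structure of the argument:

```lean
-- My own formalization, just to display the argument's structure; the predicate
-- names WritesStories and Intelligent stand in for premises 1 and 2.
variable (Agent : Type) (WritesStories Intelligent : Agent → Prop)

-- Premise 1: whatever writes stories is intelligent.
-- Premise 2: this AI writes stories.
-- Conclusion: this AI is intelligent.
example (ai : Agent)
    (premise1 : ∀ a, WritesStories a → Intelligent a)
    (premise2 : WritesStories ai) :
    Intelligent ai :=
  premise1 ai premise2
```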
The logic is airtight, yet you disagree with the conclusion. If the conclusion is wrong, then at least one of the premises must be wrong. Which is it? You’ve already agreed with #1. That leaves #2.
You believe that #2 is wrong and that AIs can’t write stories. They can only simulate story-writing. Why do you believe this?
That’s the crux of the entire debate. “Go watch these videos” doesn’t answer the question. “You’re at square zero” doesn’t answer the question. “You’re not an expert” doesn’t answer the question. “Intelligence and emotions aren’t separable” doesn’t answer the question.
If you want to defeat my argument, you need to show that AIs don’t actually write stories. Good luck to you, because AIs obviously produce stories, and I’ve presented some in this thread. Somehow that doesn’t count as story-writing. Why?
If you can’t show that AIs don’t write stories, then my argument is sound and the conclusion stands: AIs are intelligent.
Corneel:
They haven’t learned the patterns in the trivial sense of storing templates that they later fill in when generating responses. Instead, they’ve discerned the syntactic and semantic relationships among words by observing zillions of usage examples.
The fact that it’s semantics and not just syntax makes all the difference. I’ll explain in detail elsewhere, but every word in an LLM’s vocabulary is a vector in a high-dimensional mathematical space known as an “embedding space”. (AI seems to involve spaces, spaces, spaces everywhere. I’ve encountered six or seven spaces so far.) The vectors cluster together according to meaning. The vectors for cat, lion, tiger, leopard, panther, etc. will be near each other in embedding space but more distant from gorilla, which will sit in a cluster with chimpanzee, monkey, orangutan, etc. There are many dimensions in embedding space (12,288 in the largest GPT-3 model), so there are lots of ways in which vectors can be close to or distant from each other, allowing for lots of ways of expressing relationships.
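To make “closeness in embedding space” concrete, here’s a toy sketch in Python. The four-dimensional vectors are numbers I invented for illustration; real embedding vectors have thousands of learned dimensions, but the distance arithmetic works the same way:

```python
# Toy illustration of semantic clustering in an embedding space.
# The 4-D vectors below are invented for illustration only; real embedding
# vectors have thousands of learned dimensions.
import numpy as np

embeddings = {
    # "big cat" cluster (made-up numbers)
    "cat":     np.array([0.90, 0.80, 0.10, 0.00]),
    "tiger":   np.array([0.80, 0.90, 0.20, 0.10]),
    "leopard": np.array([0.85, 0.85, 0.15, 0.05]),
    # "primate" cluster (made-up numbers)
    "gorilla":    np.array([0.10, 0.20, 0.90, 0.80]),
    "chimpanzee": np.array([0.15, 0.10, 0.85, 0.90]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means 'pointing the same way'."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["tiger"]))    # ~0.99: same cluster
print(cosine_similarity(embeddings["cat"], embeddings["gorilla"]))  # ~0.23: different cluster
```

Cosine similarity just measures the angle between two vectors, which is why it’s the standard way of asking “how close in meaning are these two words?”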
The fact that the relationships are heavily semantic as well as syntactic explains many of the surprising capabilities of LLMs. I tested Claude’s ability to analogize at one point by prompting him with
He answered “angry (or furious)” and was able to explain why. There’s no way he could have done it purely syntactically. You have to know the meanings of the words, the concept of intensity, and how similar words rank in terms of intensity. Vectors in embedding space carry a lot of information.
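Analogies of the mild-to-intense sort are often illustrated with simple vector arithmetic in embedding space. Here’s a toy sketch; the three-dimensional vectors, the candidate word list, and the “warm is to hot as annoyed is to ___?” framing are my own inventions for illustration, not anything extracted from Claude:

```python
# Toy sketch of analogy-by-vector-arithmetic: "warm is to hot as annoyed is to ___?"
# The 3-D vectors are invented; think of dimension 2 as an "intensity" axis that a
# real embedding space would have to learn from usage data.
import numpy as np

vecs = {
    "warm":    np.array([1.0, 0.0, 0.3]),
    "hot":     np.array([1.0, 0.0, 0.9]),
    "annoyed": np.array([0.0, 1.0, 0.3]),
    "calm":    np.array([0.0, 1.0, 0.0]),
    "angry":   np.array([0.0, 1.0, 0.9]),
    "furious": np.array([0.0, 1.0, 1.0]),
}

# The offset from "warm" to "hot" encodes "turn up the intensity"; apply it to "annoyed".
query = vecs["annoyed"] + (vecs["hot"] - vecs["warm"])

# Answer with the nearest word that isn't part of the question itself.
candidates = {w: v for w, v in vecs.items() if w not in ("warm", "hot", "annoyed")}
answer = min(candidates, key=lambda w: np.linalg.norm(candidates[w] - query))
print(answer)  # -> "angry" ("furious" is a close runner-up)
```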