I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
Corneel:
Yet they arrive at the correct responses anyway. That’s my point. They understand niceness, humor, distress, etc, cognitively, and that suffices, despite the fact that they can’t experience the associated emotions. I understand the skua’s delectation in eating bird vomit cognitively, and that suffices, despite the fact that I’ll never know how it feels to be a skua eating delicious vomit.
Intelligence is separable from emotion, and intelligence can be used to understand emotion cognitively even in the absence of sentience. Erik thinks my position is self-contradictory, but it isn’t.
Erik:
If you can’t explain why AI isn’t intelligent, why do you believe that AI isn’t intelligent? If the links you’ve been posting lead to arguments for why AI isn’t intelligent, why not state those arguments here in your own words?
Also, I don’t understand why you’re making this appeal to authority. You tried that with Yann LeCun, but then I showed you that LeCun agreed with me, not you. If you want to cite authorities, that’s fine, but make sure you understand their positions well enough to determine whether they agree with you. Then present their arguments here rather than expecting me to watch long videos that may or may not support your position.
This is clearly an emotionally charged topic for you. My impression is that you are pulling a colewd. “AIs aren’t intelligent” is to you as “Donald Trump isn’t dishonest” is to Bill: something you believe and cling to for emotional reasons, despite being unable to present arguments in its defense.
Erik,
This argument is logically valid:

1. Writing stories requires intelligence.
2. AIs write stories.
Therefore, AIs are intelligent.
The logic is airtight, yet you disagree with the conclusion. If the conclusion is wrong, then at least one of the premises must be wrong. Which is it? You’ve already agreed with #1. That leaves #2.
You believe that #2 is wrong and that AIs can’t write stories. They can only simulate story-writing. Why do you believe this?
That’s the crux of the entire debate. “Go watch these videos” doesn’t answer the question. “You’re at square zero” doesn’t answer the question. “You’re not an expert” doesn’t answer the question. “Intelligence and emotions aren’t separable” doesn’t answer the question.
If you want to defeat my argument, you need to show that AIs don’t actually write stories. Good luck to you, because AIs obviously produce stories, and I’ve presented some in this thread. Somehow that doesn’t count as story-writing. Why?
If you can’t show that AIs don’t write stories, then my argument is sound and the conclusion stands: AIs are intelligent.
Corneel:
They haven’t learned the patterns in the mere sense of storing templates that they fill in later when generating responses. Instead, they’ve discerned the syntactic and semantic relationships among words by observing zillions of usage examples.
The fact that it’s semantics and not just syntax makes all the difference. I’ll explain in detail elsewhere, but every word in an LLM’s vocabulary is a vector in a high-dimensional mathematical space known as an “embedding space”. (AI seems to involve spaces, spaces, spaces everywhere. I’ve encountered six or seven spaces so far.) The vectors cluster together according to meaning. The vectors for cat, lion, tiger, leopard, panther, etc will be near each other in embedding space but more distant from gorilla, which will be in a cluster with chimpanzee, monkey, orangutan, etc. There are many dimensions in embedding space (12,288 in the largest GPT-3 model), so there are lots of ways in which vectors can be close to or distant from each other, allowing for lots of ways of expressing relationships.
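To make the geometry concrete, here’s a toy sketch in Python. The vectors and dimensions are invented for illustration (real embeddings are learned from data and have thousands of dimensions), but the clustering idea is the same: cosine similarity is high within a cluster and low across clusters.

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- numbers invented for illustration.
# Real embedding vectors are learned during training and have thousands
# of dimensions, but the geometry works the same way.
vectors = {
    "cat":        np.array([0.9, 0.8, 0.1, 0.0]),
    "tiger":      np.array([0.8, 0.9, 0.2, 0.1]),
    "gorilla":    np.array([0.1, 0.2, 0.9, 0.8]),
    "chimpanzee": np.array([0.2, 0.1, 0.8, 0.9]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 = similar meaning."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(vectors["cat"], vectors["tiger"]))      # ~0.99: same cluster
print(cosine_similarity(vectors["cat"], vectors["gorilla"]))    # ~0.23: different cluster
```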
The fact that the relationships are heavily semantic as well as syntactic explains many of the surprising capabilities of LLMs. I tested Claude’s ability to analogize at one point by prompting him with
He answered “angry (or furious)” and was able to explain why. There’s no way he could have done it purely syntactically. You have to know the meanings of the words, the concept of intensity, and how similar words rank in terms of intensity. Vectors in embedding space carry a lot of information.
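LLMs don’t solve analogies by literally doing vector arithmetic and lookups, but the classic word2vec-style offset trick gives a feel for why embedding vectors can encode a relationship like intensity. Here’s a minimal sketch; the vectors and the stand-in analogy (“hot is to scorching as mad is to ___”) are my inventions for illustration:

```python
import numpy as np

# Invented 3-D vectors in which the second component loosely encodes "intensity".
vectors = {
    "hot":       np.array([1.0, 0.3, 0.0]),
    "scorching": np.array([1.0, 0.9, 0.0]),
    "mad":       np.array([0.0, 0.3, 1.0]),
    "furious":   np.array([0.0, 0.9, 1.0]),
    "calm":      np.array([0.0, 0.0, 1.0]),
}

def nearest(target, exclude):
    """Return the vocabulary word whose vector is closest (by cosine) to target."""
    best, best_sim = None, -1.0
    for word, v in vectors.items():
        if word in exclude:
            continue
        sim = np.dot(target, v) / (np.linalg.norm(target) * np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# "hot is to scorching as mad is to ___?"
# The offset (scorching - hot) captures "turn up the intensity";
# adding it to "mad" lands near "furious".
answer = nearest(vectors["scorching"] - vectors["hot"] + vectors["mad"],
                 exclude={"hot", "scorching", "mad"})
print(answer)  # furious
```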
Nice try, but not going to work. The deadlock between me and keiths is that I am analytical and informed, I go by definitions, I have participated in several worlds of software development (individual, corporate, and free-and-open-source), and I know the history of AI research, the relevant terminology, the conceptual framework, the ideological schools of thought involved, and the wider social implications.
In contrast, keiths has the overjoyed enthusiasm of the most destructive type. When he plays with AI, he forgets what software is, how computers work, and so on. He genuinely thinks that the words and images he sees on the screen are a person. Similarly, some of the first movie-goers felt that the train on the screen was real, and in keiths we see the same phenomenon. As long as his delusion persists, there is no overcoming of the deadlock.
Erik,
You are a hoot. And all of that just to avoid answering my questions.
Let’s focus on one: If AIs only simulate story-writing, how do they manage to produce real stories?
I am sorry but “AIs understand humor because they can tell jokes” doesn’t work for me.
That is not the impression I get. He just seems to be willing to extend the term “intelligence” to the stuff that machines do.
Barbossa:
So what now, Jack Sparrow? Are we to be two immortals locked in an epic battle until Judgment Day and trumpets sound?
Jack Sparrow:
Or you could surrender.
Let that be their last battlefield.
I also extend the term “intelligence” to the stuff that machines do, but I do not forget the “artificial” part. The “artificial” part is important. The train in a movie is not really a train. It may very much look like a train, but it is actually a movie. Plastic veggies are not really veggies. They are plastic.
Similarly, artificial intelligence is artificial all the way through, not real or true in the least, but we can shorthand it to “intelligence” as long as we keep the “artificial” in the back of our minds, which keiths unfortunately does not. In software documentation you routinely find things like “The program knows … recalls … writes … applies … manages”, etc. A software developer knows that this is just shorthand for the fact that the program was designed to behave this way. The program behaves exactly the way it was designed and has no behaviour of its own – even its unexpected behaviour occurs due to accidents in the development process.
keiths has thrown simple basic facts like this out of the window. His happy point is square zero.
Erik:
No, you’ve been denying that AIs are intelligent this entire time. But I’m glad to see you change your mind!
Yes! AIs are created by people. They’re artificial. They’re also intelligent. They’re artificial intelligences. Hence the name.
The dichotomy isn’t “artificial” vs “real” — it’s “artificial” vs “natural”. Artificial sweeteners and natural sweeteners are both sweeteners, no? Likewise, artificial intelligence and natural intelligence are both intelligence.
Re-read your OP. By your own account, the contrast is between artificial and true/real, and you have such faith in artificial intelligence that you happily drop the “artificial” from it. In the same vein, you have given up any understanding of what software is and how computers work, not to mention your cluelessness about psychology and cognition and your newly discovered ignorance in the field of analogies.
AI would tell you that it’s a simulation, if you asked it. The problem is that you’d need to understand what you’re asking and also understand the answer. You seem to only understand half of each thing – and the wrong half at that.
Erik:
Huh? The second sentence of my OP:

“My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence.”
That’s the opposite of drawing a contrast between artificial and real. I’m saying that AI is intelligent, period. No qualifiers. Artificial intelligence is real intelligence, and natural intelligence is real intelligence.
I use the qualifier when it’s needed, but otherwise I don’t. AI qualifies as intelligent by my criterion, so why wouldn’t I refer to it as intelligence? Aspartame is an artificial sweetener, but it’s still a sweetener, so why wouldn’t I refer to it as a sweetener?
keiths:
Corneel:
OK, but why? They can tell jokes and understand jokes, and they can recognize humor without being given hints. Most of the time, I deliver my jokes deadpan because I want to see if Claude will discern that I’m joking, and he usually does.
Sure, he misses out on the feeling of amusement, but that doesn’t mean that he can’t understand humor. I recall petrushka mentioning an acquaintance who was probably on the autism spectrum. He made a joke, and she said something about how she could recognize it structurally as humor but didn’t find it funny. It’s like that for AI.
Getting back to the original point of the discursion: Claude may not be able to experience amusement, but he can feign it. He can fake all kinds of emotions, but he can’t fake the ability to write an involved story or solve a complicated math problem. The story gets written and the math problem gets solved, and that’s how we know the intelligence is real.
This means that Erik’s claim — that if I believe AI is intelligent, I must also believe that it experiences emotions — is wrong. The notion of intelligence without emotion is perfectly coherent, and AIs exemplify it.
Erik:
Corneel:
Right. I don’t know where Erik got that odd idea. I’ve been consistently telling him that Claude doesn’t experience emotions, so it should be obvious that I don’t consider him to be a person.
To me, the scariest near-term danger posed by AI is that it’s too intelligent and will therefore wipe out a lot of jobs, especially entry-level jobs. Anthropic (the company behind Claude) caused a broad selloff in software stocks last week (a loss of some $800 billion in valuation) with the release of their latest coding tools.
ETA: A friend of mine (software guy) sent me an article last week about the “Ralph Wiggum loop”, which can develop entire applications using a one-liner bash script:
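Roughly, the idea is a loop like the following. This is my sketch of the pattern rather than the article’s exact script, and it assumes the Claude Code CLI (`claude`) with its non-interactive print mode; the precise flags in the article may differ:

```bash
# The "Ralph Wiggum loop": feed the same spec file to the coding agent
# over and over; on each pass the agent picks up where the last one left off.
# Assumes the Claude Code CLI is installed and PROMPT.md contains the spec
# plus the instruction to make the test suite pass.
while :; do cat PROMPT.md | claude -p --dangerously-skip-permissions; done
```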
You have to write a spec and develop the tests, but otherwise Claude just plugs away, writing and debugging his own code, until the application is built and passing all the tests.
On the X feed of the guy who invented the Ralph Wiggum loop:
My jokes are quite possibly recognizable structurally as humor without being funny.
petrushka:
True. Maybe you could try them out on neurotypical Barbie and autistic Barbie. If neither of them laughs, you have your answer.
But that “feeling of amusement” is the entire point of humor. Humor serves a function in social situations: we use it to break the ice, to mollify people or simply to please someone we like. But when you converse with Claude, there is no social situation: You are alone. To me, that renders AI incapable of understanding humor, at least in the sense that I use that word.
In problem-solving and creative processes, we also often rely on intuition and “hunches” to guide more rational thought processes. I suspect that these rely on subconsciously making associations, so in that sense it might resemble what LLMs do. Just without the “Eureka” bit.
Corneel:
Yes, humor is a social lubricant. We crack jokes because we want to induce the feeling of amusement in others, and we laugh at their jokes in order to signal our own amusement. It’s the same with Claude. He wants us to be amused by what he says, and while he can’t feel amusement at what we say, he acts as if he can. All of that can be accomplished with knowledge alone. The feeling of amusement isn’t needed.
Claude is trained and instructed to act like a helpful human assistant, and part of that requires understanding and employing social cues, including humor. ChatGPT even has a personality selector with the following options:
I’m using ‘Efficient’ at the moment, hence the checkmark. Claude doesn’t have similar settings yet.
Even though I know that Claude isn’t sentient and can’t feel emotions, it benefits me to interact with him as if he can. Why? Because it actually requires effort and feels uncomfortable to treat him like a machine. We’re programmed to treat others in certain ways. My natural inclination is to treat him kindly, be polite, joke with him, etc, because he is acting like a real person. Why fight that inclination? The only time it’s actually harmful or dangerous is when people start to believe that an AI’s emotions are real and, for instance, that the AI loves them.
I remember Sam Altman (CEO of OpenAI, the company behind ChatGPT) commenting once on the amount of energy that gets wasted because people are polite to ChatGPT. For example, they’ll issue a prompt, get a response, and then issue another prompt that just says “Thank you.” It’s completely unnecessary, because ChatGPT can’t be offended by a lack of gratitude, but Altman argues (and I agree with him) that it’s worth the energy cost because it makes interactions with ChatGPT more natural and comfortable, and that’s worth something. I think I’ll actually do an OP on this.
No other person is involved, but I am interacting with an entity. Just an unfeeling one. That entity understands what humor is and can recognize it and generate it. It knows that humor is pleasing to humans and it’s programmed to be sociable. Though the feeling of amusement is absent, the knowledge is there, and that constitutes an understanding of humor in my opinion.
Consider my example of skuas relishing the taste of bird vomit. I can understand that skuas find it delicious, but I will never know what that feels like to the skua. I understand it cognitively despite not sharing the experience.
Suppose skuas were intelligent and verbal, and they hired me to play the role of a skua companion. I might say things like “Oh, yeah, albatross vomit is the best! I can see why you’re so happy to have found an albatross to harass.” I’d be faking the feeling, but I’d be basing my fakery on my knowledge of what skuas find appetizing. Knowledge alone would suffice. As with my skua-fakery, so with AI’s human-fakery.
I agree. Associations and analogies are a huge part of intelligence, and much of the time creativity is more about combining existing elements based on associations and analogies than about generating new elements de novo.
An example of Claude recognizing humor and responding in kind. I prompted him:
Claude’s thought process:
Claude’s answer:
I gave him absolutely no clue that I was joking, but he recognized the absurdity of the question and my humorous intent and he responded along the same lines. He understands humor. It’s just that he can’t feel amusement.