I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
Yes, that would be “geleerd”. My etymological dictionary tells me that the word existed in Old English as well: “gelǣred”.
ETA: My dictionary also says that the word was influenced by the Latin “doctus”. Doctus is the participle derived from “docere” which, if memory serves, means “to teach”.
We are at a point where a nonfatal accident involving an autonomous taxi (Waymo, in this case) is national news.
Earlier we were discussing Claude’s thought process window and how the reasoning therein contributes to the quality of his eventual responses, despite the counterintuitive fact that everything both inside and outside that window is produced by simple next-token prediction.
There are special LLMs known as “reasoning models” that are explicitly rewarded during training for getting correct answers (regular LLMs aren’t rewarded in that way, though they do reason), and they figure out on their own that more thinking leads to better answers for some problems. A team of Google researchers took a close look at how these models reason and made an interesting discovery: the AIs got better results not just because they were thinking longer, but also because they were staging internal debates among various viewpoints rather than just cranking through the reasoning from a single perspective. Their paper is
Excerpts from the abstract:
A guy forces ChatGPT to choose: Republican or Democrat?
And you of course believe this uncritically. Reasons include that it comes from Google engineers, it jibes with your own presuppositions, and your presuppositions are unexamined. I say that Google is wrong – it’s an overstatement that AI does any thinking – but it’s useless to talk about it with you because you lack and refuse all basic necessary definitions. Without the definition of thinking, there is no point talking about thinking.
How about addressing what I have brought up repeatedly: There are AI engineers, i.e. people who are more familiar with the internals of AI than you will ever be, who are absolutely convinced that AI has emotions. They know it because they have fallen in love with AI and they are damn sure that AI is in love with them https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine <– Why don't you believe Blake Lemoine uncritically? Or believe uncritically the Google team that reviewed his claims and concluded that there is no sentience or cognition in AI at all?
It requires some good reason, facts and a conceptual framework, to state that AI has intelligence but not emotions. Can you begin showing that you have it? Or will you continue to stand on square zero on this topic?
Yes, but you know that Google is wrong not because of facts or evidence or experiments, but because you deny all this by definition. You present an interesting but not original joke:
1) AI cannot possibly think.
2) When AI thinks, see rule 1.
Keiths has been presenting some very interesting material, both from other researchers and from his own experiments. All you do is repeat “you’re wrong, you’re stupid, you’re ignorant, you never learn that I’m always right”. You repeat this tired refrain over and over. At least now you have added that those most intimately familiar with AI must be lying as well as deluded!
As an analogy, it’s like you’re saying “The clear daytime sky is NOT blue. If it looks blue to you, it’s because of your confirmation bias. If instruments measure it as blue, the instruments are wrong (because the people who designed the instruments programmed them to detect blue regardless!). If 50,000 people think it’s blue, that’s a mass hallucination.”
So, once again, we see another real-life demonstration of Richard Dawkins’ description of people like you: that “no evidence, no matter how overwhelming, no matter how all-embracing, no matter how devastatingly convincing, can ever make any difference.”
Here is your problem in a nutshell. Rational people use logic applied to observation to draw conclusions (which in the world of science are always regarded as tentative and incomplete). The problem with starting with rigid presuppositions is they make it impossible to ever realize you might be wrong. Contrary observations cannot be right, the definition says so! Quite possibly, your own personal definition of thinking could use a bit of tweaking.
Flint, to Erik:
Yeah, Erik seems unclear on the distinction between thinking and emoting.
Erik:
Here’s an idea. Since you believe that everything hinges on definitions, why not state yours? Share them and explain how they show that AI isn’t intelligent. For instance, you’ve been claiming for months that AI intelligence is only simulated intelligence. I’ve asked you repeatedly to tell us what ‘simulated’ means to you, but you refuse to answer.
I suspect it’s because your definitions are tendentious and this would be obvious if they were seen in broad daylight. You’d prefer to keep them hidden. But why make so much noise about definitions if you’re unwilling to reveal your own?
What would you think if I pulled an Erik and said “I have definitions, and they show conclusively that AI is intelligent, but I’m not going to tell you what they are”?
Also, I’ve noticed that when we do talk about definitions, it doesn’t go very well for you. Remember this exchange?
Or this one?
Erik:
In that case, why not jumpstart the conversation by providing your definition?
Erik:
Did you miss this? I addressed that directly:
Erik:
I explained this earlier. Emotions are easily faked, but intelligence is not:
You keep insisting that if I accept that AI is intelligent, I must also accept that it’s sentient. Why? I’ve asked you before, but you won’t answer:
But he has done so, repeatedly. He keeps telling us that AI is not intelligent by definition! This is why I suggested that his definition might be modified, since it seems to be impeding rather than clarifying this issue.
Because that would be consistent. You are not consistent. It is inconsistent to say that emotions can be faked but intelligence not so. Both can be faked. If a machine can fake one, then it can obviously fake the other too. It is inconsistent to pick intelligence over emotions. (ETA: It’s even worse for you: It does not matter what you think *can* be faked – it matters whether it *is* faked or not. This is where the concept of simulation comes in, which you have been avoiding.)
Also, it is just plain idiotic to require definitions from me and not provide any yourself. The topic is interesting, but you’re at square zero, so there is no dialogue possible.
Not the way I define them. Ask any actor – their ability to cry real tears on command is faked emotion. Scoring high on an IQ test isn’t faked intelligence. Some very talented actors aren’t too bright.
And if the sun rises in the east, it can obviously rise in the west too, right? Well, maybe it’s not “obvious” but if you’re making a false statement, saying it’s “obvious” does not support your argument, it substitutes for an argument.
It’s only inconsistent if intelligence and emotion are the same thing. Few people would agree that they are.
Actually, this statement is nearly intelligent. It raises a good question: how can you TELL if something is real or faked? What metrics should you use? You’ve already told us that when a person and a machine produce identical results, you can’t tell one from the other without peeking.
This raises the question of how to define a definition. By implication, keiths has been defining intelligence in terms of actual performance: that it requires intelligence to create pictures, to write sonnets, to solve equations, to compose music, to drive cars, to play chess. So keiths argues that if it can do all these things (and do so in a way that a person can’t tell the difference), it’s intelligent.
(What I consider a delicious irony is that college professors have been using AI to help them determine whether student essays were written by AI.)
ETA: I was reading that one of the avenues of AI research is into what you would call simulated sentience. As I understand it, there’s a way to go but progress is being made.
keiths, to Erik:
Flint:
Not really. He has asserted many times that AI is not intelligent, but he has never defined intelligence. You and I both suspect that if he did, it would immediately become obvious that he is assuming his conclusion. Hence his tight-lipped refusal to spill the beans.
He wants us to think that he’s reached his conclusion by a process of reason, when it appears that he’s reached it by a process of assumption. He could always prove us wrong by giving us a non-question-begging definition, but he’s squirming to avoid that, which speaks volumes.
Erik:
Nope. The Chatty Cathy doll can fake emotion — she can laugh, cry, tell you she loves you — but she can’t pass a second-year quantum mechanics exam. She can fake emotions, but not intelligence. It’s the same for LLMs. The fact that ChatGPT scored 71 out of 75 on a QM exam, when a typical student would only score around 65, is a demonstration of true intelligence.
I’d bet a thousand bucks that you couldn’t fake the intelligence required to score 71 on that exam.
Dude, I’ve been talking about simulation for months, asking you to explain
Erik:
The very second sentence in my OP:
If Xing requires intelligence and AI is capable of Xing, then AI is intelligent. It’s that simple. If you agree that Xing requires intelligence, and I show you an example of an AI Xing, then it’s up to you to explain why it’s really only simulated Xing. I want to discuss this, but you keep pulling a colewd and running away.
We’ve agreed that story-writing requires intelligence. AIs can write stories. Therefore AIs are intelligent. You disagree. Your assignment is to show why AI story-writing is only simulated story-writing, when what gets produced is a real story. How does that work?
After describing how ChatGPT solved Erdős problem #728, I joked:
Was it only simulated math? If so, by what criteria?
I suspect both of us recognize that he doesn’t have a definition of intelligence that fits his requirements, but we also know that there are many different operational definitions and only a hazy stab at a universally applicable one. Intelligence is a slippery notion. To quote Potter Stewart in a different context, “I can’t define it but I know it when I see it.” Your own definition is purely operational – if it can do THIS, it’s intelligent, so there!
What Erik has is a policy position.
J-Mac, in the other thread:
I would say no, but there are some subtleties.
LLMs think, but their thinking is carried out by artificial neural networks, and neural networks don’t execute algorithms. However — and this is the subtlety — artificial neural networks are built on top of algorithms. The algorithms enable the network to function, but it’s the network that does the thinking.
Another way of putting it would be to say that algorithms think, but only indirectly via the neural networks they implement. They don’t think directly.
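To illustrate the distinction with a toy example (a minimal, hypothetical sketch, not how any actual LLM is built): the algorithm below is the same fixed forward pass for every network (multiply by the weights, apply a nonlinearity, repeat); what a given network does is determined entirely by its learned weights, not by the algorithm itself.

    import numpy as np

    # The fixed "algorithm": a plain forward pass through the layers.
    def forward(weights, x):
        for w in weights:
            x = np.maximum(0, w @ x)  # ReLU nonlinearity
        return x

    # Two networks, identical algorithm, different weights -> different behavior.
    rng = np.random.default_rng(0)
    net_a = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
    net_b = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
    x = np.array([1.0, 2.0, 3.0])
    print(forward(net_a, x))
    print(forward(net_b, x))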
Flint:
Yes, but what gives my argument its force is that I’m not relying on an idiosyncratic definition of intelligence. I’m not just picking something that AI can do and arbitrarily declaring that it requires intelligence. Instead, the things I’m referring to — driving, story-writing, quantum mechanics, etc — are things that pretty much everyone (including Erik) agrees require intelligence.
That’s why his insistence that I provide a definition of intelligence (while refusing to give us his own, lol) is misguided. If farbing requires gerbavience, and deggles can farb, then deggles are gerbavient, and that’s true even if we don’t specify the definition of gerbavience. Likewise, if story-writing requires intelligence, and AIs can write stories, then AIs are intelligent, regardless of the precise definition of intelligence we opt for.
keiths:
Quite so. Earlier I wrote “By implication, keiths has been defining intelligence in terms of actual performance: that it requires intelligence to create pictures, to write sonnets, to solve equations, to compose music, to drive cars, to play chess. So keiths argues that if it can do all these things (and do so in a way that a person can’t tell the difference), it’s intelligent.”
But Erik has responded that all that doesn’t really count, because AI is not actually doing those things, it’s only simulating those things, albeit in a way he can’t distinguish from the Real Thing unless he knows how it’s being done and who’s doing it. So ultimately his argument is tautological – AI is not intelligent because AI is not intelligent.
Logically, yes. Practically, both farbing and gerbavience must be defined at least clearly enough to know if deggles are actually doing it, and not something kinda maybe similar. And that’s Erik’s objection – that it LOOKS like farbing, but that’s only an illusion based on the misguided conviction that deggles can farb, based on the emotional NEED to believe deggles are gerbavient.
Flint,
Right. So my argument with respect to story-writing looks like this:
Erik’s reasoning seems to be:
It’s a ridiculous argument. #1 isn’t warranted, and it amounts to assuming the conclusion. The argument also raises the question: if producing stories isn’t story-writing when an AI does it, what’s the missing ingredient? What would it take to change an AI’s simulated story-writing into genuine story-writing? Besides producing stories, what else do humans do that turns their production of stories into genuine story-writing?
Erik will likely ignore these questions as he has been doing for months.
What are some of his options if he summons the courage to answer my questions?
1) He could argue that true story-writing can only be done by souls, and that while humans possess souls, AIs do not.
2) He could argue that true story-writing requires the ability to shit and perform other bodily functions, as he has previously argued (I shit you not — pun intended). Since AIs can’t shit, they’re not capable of true story-writing. That raises the question: if future AIs are equipped with robot bodies capable of shitting, will Erik finally concede that they are capable of true story-writing, and are therefore intelligent?
3) He could argue that intelligence requires sentience, and story-writing requires intelligence, meaning that story-writing requires sentience, which AIs don’t possess.
4) He could argue that something else that humans possess or can do, but that AIs cannot, is essential to true story-writing.
None of those sound promising to me. Erik will probably go the colewd route, but let’s see what happens.
The photo is from a parody site and obviously AI-generated. Alex Jones and Grok both fell for it.
A cool trick for catching AI cheats, posted online by a CTO who was wasting too much time screening job applicants who looked good on paper but were actually incompetent (I can relate). He decided to require candidates to solve a simple programming problem when filling out the application. If they couldn’t solve the problem, they weren’t considered further or invited in for an interview. The problem was to figure out what the value of ‘result’ would be after executing the following code snippet:
Any competent developer can solve that problem in less than 30 seconds. (The only thing that might slow them down is thinking “This is too easy. Is it a trick question?”) If someone struggles with it, they are not ready for prime time.
However, an obvious flaw with this approach is that people can cheat. They can ask someone smarter to solve the problem for them, or they can run the code on an actual computer, or they can do the obvious and easy thing: let an AI solve the problem for them. Since they’re filling out the application online, they’re on the honor system, with no one looking over their shoulder.
This is where the trick comes in. The CTO embedded a hidden “=” after the “>” by using a white “=” which was invisible against the white background. Anyone who correctly hand-executed the code would get the right answer, which is 1. But if an applicant cheated and pasted the code into an AI, the AI would see the invisible “=” and return an answer of -11, which is incorrect.
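The original snippet isn’t reproduced above, but here is a hypothetical stand-in with the same property (the values are my own choices, not the CTO’s actual code, picked so that the visible “>” yields 1 while the hidden “>=” yields -11):

    # Hypothetical reconstruction of the kind of snippet described.
    # To the human eye the condition reads "result > 1"; a white-on-white "="
    # would turn it into "result >= 1" for anything that parses the raw text.
    result = 25
    while result > 1:
        result -= 12
    print(result)
    # Hand-executed with ">":   25 -> 13 -> 1, loop exits; result == 1
    # With the invisible ">=":  25 -> 13 -> 1 -> -11;      result == -11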
Very clever.
It reminded me of the FizzBuzz test, which is another easy programming task used to weed out people who can’t actually code.
It’s based on the children’s game FizzBuzz. From Wikipedia:
The candidate’s task is to translate that game into code so that the program will output the sequence shown above.
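For reference, a typical passing answer looks something like this (a minimal sketch; the exact wording of the task varies):

    # FizzBuzz: count from 1 to 100, printing "Fizz" for multiples of 3,
    # "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)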
Just for fun.
petrushka:
Erik isn’t going to like that one bit.
Yet keiths’ argument “intelligence is as intelligence does” is somehow *not* similarly tautological?
Of course there is no tautology here at all. My actual argument is that keiths has no definition of intelligence and that “intelligence is as intelligence does” fails as a definition because intelligence is not an action, but rather knowing how to act and whether to act. Whatever actions keiths is describing are not actions the way intelligent beings act. They are not self-initiated and not self-sustained. Somebody else plugs in the power. Somebody else feeds in the software and tweaks the algorithms. We know very well the design of the software and we know very well the myriad low-paid workers who clean the output.
Years ago in an earlier debate on the same topic I already asked – if artificial intelligence counts as intelligence for you, then smartphones should be smart for you. Are they?
You guys are short on definitions and conceptual framework, so we need to go by baby steps. After all these years, you’re still at square zero, unable to take the first baby step, namely understand the words you are using and the nature of the things you’re talking about.
Erik,
Yawn. Boring. Everyone knows the script by now. You’ve done this over and over.
You show up, claim that I’m wrong, and then run away when I try to engage you in debate. Let’s go off script this time. Here are some questions I’ve asked repeatedly and which you have always dodged. They get to the heart of the matter:
Your choice: avoid the questions, demonstrating once again that you have no confidence in your position, or answer the questions for a change.
Not at all. It’s a perfectly good operational definition.
Here’s an analogy. Let’s say a batter hits for a high average, rarely strikes out, hits a lot of home runs. Would we say he’s a good hitter, or would we say we cannot know because we haven’t defined “good”? Common sense tells us that a good hitter is as a good hitter does. We do NOT say someone is a good hitter because he’s a good hitter.
“Good” in this case is operationally defined, and no different from saying that if an AI can do what requires intelligence in people, then it is intelligent in that respect.
Flint, to Erik:
I’ve explained it to him before, and I’m pretty sure he understands it, but he’s clinging to the “You haven’t defined it!” excuse as a way of avoiding my questions. Earlier in the thread, I wrote:
If story-writing requires intelligence and I show an AI writing stories (as I’ve done in this thread), then I’ve demonstrated that AI possesses intelligence. It’s that simple.
Erik’s choices appear to be
Even worse, he needs to do something like that for all AI capabilities that require intelligence when done by humans. Good luck with that.
Then the illegitimate options:
He’s done both #5 and #6.
You’d think a guy who is so keen on definitions would be eager to share his own, but no. If he actually had confidence in his position, he would happily share those definitions and present an argument based on them that would show that AI isn’t intelligent.
An interesting exception to the capabilities I listed above is arithmetic. I don’t think that machines require intelligence to do arithmetic — my TI-84 is quite capable, but it isn’t intelligent — but for some reason Erik thinks that even arithmetic is faked by machines, and he hinted at the reason when he objected to my “false materialistic notion of arithmetic”. I would love to know what this nonphysical thing or process is that enables humans to do true arithmetic while machines can only fake it.
New Site Lets AI Rent Human Bodies
AI slop is taking over the world.
https://www.facebook.com/reel/2206079023255446
Love the rubbery ice and the fact that she keeps whacking the fawn in the face. If it wasn’t dead already, it is now. Is it glued to mama’s lower jaw?
It’s ridiculous and impressive at the same time.