I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
I think you may have a misunderstanding about how the cars are trained. Individual cars do not learn (although they have user-chosen options that may be invoked after an encountered situation).
But every car records what its cameras see, and if it encounters an unexpected event, or has an accident or near miss, the recordings become part of the training data for the whole fleet.
Minor example: a recent version of FSD overreacted to leaves blowing across the road. This was reported by a number of early adopters, and the fix was rolled out to everyone within a couple of weeks. Everyone with the same version of the software has the same level of experience and the same abilities.
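To make that loop concrete, here’s a toy sketch of how that kind of fleet learning works in principle. It is not any manufacturer’s actual pipeline, and every name and number in it is invented; the point is just that individual cars only record and flag, the retraining happens centrally, and the whole fleet gets the same new build at once.

```python
import random

class Car:
    def __init__(self, model_version):
        self.model_version = model_version
        self.flagged_clips = []

    def drive(self):
        # Pretend roughly one drive in ten produces an unexpected event worth flagging.
        if random.random() < 0.1:
            self.flagged_clips.append({"event": "near_miss", "frames": "..."})

    def upload_and_clear(self):
        clips, self.flagged_clips = self.flagged_clips, []
        return clips

def retrain(dataset, base_version):
    # Stand-in for an expensive offline training run on the full central dataset.
    return base_version + 1

def fleet_update_cycle(fleet, dataset, version):
    for car in fleet:
        dataset.extend(car.upload_and_clear())   # collect flagged recordings
    new_version = retrain(dataset, version)      # retrain centrally, not in the cars
    for car in fleet:
        car.model_version = new_version          # everyone gets the same build
    return new_version

fleet = [Car(model_version=1) for _ in range(100)]
dataset, version = [], 1
for _ in range(30):                              # a month of driving
    for car in fleet:
        car.drive()
version = fleet_update_cycle(fleet, dataset, version)
print(f"fleet is now on version {version} with {len(dataset)} training clips")
```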
If you haven’t watched a recent video made by an actual owner in a complex scenario involving heavy traffic, bicycles, and pedestrians, you really can’t fathom how much progress has been made in the last six months.
The cars are very unlikely to have an accident, because their highest priority is avoiding hitting things and people.
But the rules of the road can be very complex, particularly speed limits. For example, a sign may say “End 45 MPH”, without immediately specifying the current limit.
School zones have speed limits that are effective only during certain periods.
Signs get covered by foliage.
I just did some driving through upstate New York. My car is not self-driving, but it does display the current speed limit. It was wrong for most of the trip, though it’s very accurate in Connecticut.
GPS routing is pretty good for choosing roads and for telling you where to turn, but terrible for telling you that you have arrived. I’ve had three instances this week where it was off by several hundred feet at the destination.
I doubt it. Training neural networks is expensive and requires a lot of compute power, so at least for now I don’t think it’s something that cars can do on their own. I do know that they’re constantly recording data (via the automotive equivalent of a flight data recorder) so that after an accident, investigators can download the data and figure out what went wrong, and engineers can modify the training dataset or make other changes to avoid the problem in the future.
Yes, people can, and that’s one of the many differences between human driving and current AI driving. It doesn’t help Erik’s case, though, because he doesn’t need evidence that human drivers can do things that AI can’t. That’s obvious. He needs evidence that the AIs don’t actually drive. Driving requires intelligence, so if he acknowledges that self-driving cars actually drive, he’s conceding that AI is intelligent. (He actually did acknowledge that Waymos drive, but he had to walk it back when I pointed out the implications.)
I think he’s SOL on the driving. The driving that AIs do matches the dictionary definition: they “operate and control the direction and speed of a motor vehicle.” It ain’t simulated. Case closed.
He’s in a difficult position. He needs to prove that nothing AIs do requires genuine intelligence. Hence his “it’s all simulated” approach. But if even one such AI capability isn’t simulated, it’s game over. We’re already there with driving. Ditto for story-writing and analogizing and solving physics problems.
When a forklift lifts a pallet, it’s genuine lifting. When an AI writes a story, it’s genuine story-writing. When a self-driving car drives, it’s genuine driving. The latter two require intelligence. AI is intelligent.
The goalposts have been moved. Ordinary intelligence has been achieved. Extraordinary intelligence has not yet been achieved.
AIs can take IQ tests and score in the range of 120. Above average, but not yet PhD level. So we shouldn’t be surprised that they glitch out on puzzles.
Yesterday, I asked my browser what to do with an Amazon package delivered to me by mistake. Before discussing its response, I will point out that the mistake was made by a human driver, and the level of complexity was low.
The AI read the request wrong and assumed my package was delivered to someone else. It persisted in this.
It turns out to be a trick question. Amazon has no procedure for this situation. The correct recipient was half a mile away, so I drove the package to their house. The law says I can simply keep it. Amazon will, no doubt, realize the package is lost and replace it. The poor browser AI has no consensus on which to base a response.
But it was unable to reason about this and acknowledge the dilemma. One strike against high-level intelligence.
Not exactly the way I read him. Learning, by direct experience or by observation, really ought to be a keystone of intelligence. In theory, our educational institutions aren’t so much training grounds as places that instill an ability to think and learn, so we can accumulate a body of experiences from which to extrapolate and develop judgment.
I think what Erik is trying to do is to collect a set of mental exercises that humans can do and AIs can’t, to exceed the boundaries implicit in training, memorization, etc. As far as I can tell, the current approach to greater AI intelligence is to expand the scope of the training, which is effective but limited. In a nutshell, AIs follow trails, but do not blaze them.
Robots are learning by direct experience and by observation. One could say they are not equal to people, but what does that mean? It takes decades for a human to achieve adulthood. Some never do.
Flint:
Here’s a thought experiment to consider: imagine that a world-renowned radiologist is stricken with a neurological disorder that leaves them unable to learn. They can’t improve their skill set, but they can still evaluate X-rays, PET scans and MRIs with the best of them. Would you argue that they are no longer intelligent since they lack the ability to learn? That seems like a stretch to me.
Second, AIs do learn by experience and observation. That’s what training is. The AI experiences all of its training data, and the observations it makes during that process change the way it responds. It learns.
Even if we suppose that learning “in the field”, and not just in the lab, is a requirement for true intelligence, there are AIs that qualify. Earlier in the thread I described how AIs are teaching themselves to play video games through trial and error. They explore the game, try various techniques and strategies, and over time they learn to play well, exploiting the things that gain them points and avoiding the things that cause them to lose points or get killed. That’s direct learning through experience and observation.
There’s no theoretical reason why that kind of learning can’t be employed in the real world. A self-driving car could learn from experience and observation, but there are pragmatic reasons not to implement this. For one, it might not make sense costwise to give a self-driving car that ability. For another, there’s a regulatory issue. Manufacturers have to prove that their cars are safe before the government will allow them on the road, and they provide that proof via testing. Any given version of the software gets put through the wringer before being unleashed on the public. That testing remains valid if the software and neural network are frozen, but if you allow the car to learn, you run the risk of it learning new behaviors that make it less safe.
Here’s a contrived example, but it makes the point. Suppose a car somehow learns through observation that people are getting to a particular destination faster by driving the wrong way on a one-way street under certain conditions. You (and the government) don’t want it to start using that strategy, obviously, and by keeping it from learning, you can prevent that. There can be practical reasons for denying it the capability to learn even if it’s technologically and economically feasible.
Eventually AIs will be dependable enough in their learning that we won’t have to worry about this sort of thing, but for now, it’s actually a good thing that they don’t learn while in the field.
Education helps, but if it were a prerequisite for learning, humans would have died out a long time ago. People are wired to learn and they will do so whether or not they attend school. Schools don’t teach us how to learn; they teach us how to learn better.
To me, it appears that his goal is to maintain human exceptionalism. He wants at least some human capabilities to be forever out of the reach of machines, and though he’s coy about it, I sense that it’s for religious and/or philosophical reasons. From time to time he mentions that he thinks there’s something nonphysical about human reasoning, and whatever that nonphysical entity or process (a soul?) is, he seems to want it to be responsible for something that machines will never be able to do.
I wish he’d be more forthright about it. I’ve thought a lot about souls and why they don’t exist (including doing multiple OPs on the topic here), and it would make for an interesting discussion: if they exist, what are souls capable of that merely physical entities cannot do? How can we tell whether they exist? Can we show that they don’t exist, as I maintain?
Erik has even argued that something as basic as human arithmetic is nonphysical to an extent. Hence his insistence that computer arithmetic is only simulated arithmetic. The machine is simulating whatever the nonphysical soul, or process, or thingamabob is doing in humans.
How about spelling it out for us, Erik? What precisely do you think this nonphysical soul/thingamabob/process is doing that mere matter cannot?
They are definitely capable of blazing the trail, and they are doing so more and more. A classic example is an AI that learned to play the game Go, which is far more difficult than chess, so well that it defeated the world champion and invented new moves that human players had never before seen.
I cannot easily dismiss the existence of soul.
But I think it is physical and is embodied in brains. I don’t know if it is practical or economically feasible to build artificial brains that replicate all human brain activities.
I do not have a compelling case, but I have reasons. I think AI training mimics experience and education, but does not replicate evolutionary history. There is a lot of hard wiring in brains, analogous to infrastructure. Evolutionary history is the source of consciousness. Attention is the result of selection for survival.
There is a reason for the term neocortex. It makes primates able to foresee future events and plan, but it is not the seat of consciousness. That would be in the old brain, something not implemented in AI.
Certainly I’d say their intelligence is diminished.
Your position is that AI is at least partially intelligent, with perhaps considerable room for improvement. I see this as a quantitative view, probably with AI on track to reach the singularity point where they have the capability to make themselves more intelligent iteratively, designing both hardware and software in the process.
And I see Erik as arguing for a qualitative difference, that AI intelligence is sufficiently different in kind from human intelligence (despite an impressive ability to mimic human intelligence) that it can’t ever qualify as intelligent as Erik defines the term. So maybe someday AI will be able to outperform people at everything people do, but we would need to coin a different term to describe that sort of general ability, since “intelligent” remains reserved for living creatures.
Various science fiction authors have addressed this difference, and their “digital entities” have full self awareness, fear of “death” (loss of continuity, I guess), and distinct individual personalities. I don’t know where Erik’s definition might fit in such a world.
More on the subject of AIs learning to play video games.
A case study from 2020:
Agent57: Outperforming the human Atari benchmark
What’s striking is that the AI starts out with zero knowledge of each game. It can see the screen (in the form of raw pixel values) and it can see the score. It has certain controls available to it (joystick inputs, firing buttons, etc), but it doesn’t know what they do. It starts from scratch. Everything it learns about the game, it learns on its own. Yet after practicing for a while, it can outperform a typical human.
A video explaining the research:
DeepMind Made A Superhuman AI For 57 Atari Games!
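If anyone is curious what that kind of trial-and-error learning looks like in code, here’s a bare-bones sketch. It is emphatically not Agent57 (the real agent stacks a great deal of machinery on top of this idea), and the toy “game” is invented, but the shape of the loop is the same: observe the state, pick an action, collect the reward, and update your value estimates.

```python
import random
from collections import defaultdict

class ToyGame:
    """A 10-cell corridor. Action 1 moves right, action 0 moves left.
    Reaching the right end ends the episode and scores a point.
    The agent is never told any of this; it only sees states and rewards."""
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos = max(0, min(9, self.pos + (1 if action == 1 else -1)))
        done = (self.pos == 9)
        return self.pos, (1.0 if done else 0.0), done

q = defaultdict(float)                  # value estimates for (state, action) pairs
actions = (0, 1)
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

def pick(state):
    if random.random() < epsilon:       # explore occasionally
        return random.choice(actions)
    best = max(q[(state, a)] for a in actions)
    return random.choice([a for a in actions if q[(state, a)] == best])

env = ToyGame()
for episode in range(500):
    state, done = env.reset(), False
    for _ in range(500):                # safety cap on episode length
        action = pick(state)
        nxt, reward, done = env.step(action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        target = reward + gamma * max(q[(nxt, a)] for a in actions)
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = nxt
        if done:
            break

print("greedy action per cell:", [max(actions, key=lambda a: q[(s, a)]) for s in range(9)])
```

Nothing ever tells the agent what the actions do, yet the greedy policy it prints at the end points toward the goal.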
petrushka:
I’m sympathetic to the idea of a physical soul, because all of the things that have traditionally been ascribed to the soul appear to be ultimately physical and brain-based. However, I shy away from using the word ‘soul’ in that way because most people think of souls the standard way, as nonphysical entities that animate our bodies. ‘Physical soul’ is an oxymoron if you define ‘soul’ that way.
Believing in the existence of a soul, but a physical one, strikes me as analogous to calling oneself a theist but then defining God as the universe itself, not as a separate entity. It’s philosophically interesting but confusing if you’re trying to communicate with people who use the terms ‘soul’ and ‘God’ in the standard ways.
That might be a goal for research purposes, but a lot of what brains do is specific to humans (e.g., controlling appetite in response to hormonal signals). There’s no practical purpose in replicating functions like that.
It’s true that humans arrived at our capabilities through evolution, but that doesn’t mean that evolution is the only way of getting there. Without pre-existing designers (sorry, ID folks), the universe had nothing but evolution as a means for developing intelligence, but that limitation no longer applies. We are the designers now, and we can leapfrog over evolution.
Also, I would argue that AI development can be thought of as a form of sped-up evolution. It’s just that the mutations are chosen by humans, selective pressure is applied by humans, and replication is controlled by humans (for now).
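To make the analogy concrete, here’s a toy, purely illustrative sketch of human-directed evolution: random variation, a fitness function chosen by the designers, and replication of the best variants. The target string and the parameters are made up.

```python
import random
import string

TARGET = "intelligence"              # the goal, chosen by the "designers"
ALPHABET = string.ascii_lowercase

def fitness(candidate):
    # Selective pressure chosen by humans: count characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Variation: each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:10]                                        # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(40)]  # replication with variation
    population = survivors + offspring

print(f"best candidate after {generation} generations: {population[0]}")
```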
Agreed, but whatever it is in the old brain that enables consciousness, I see no reason to think that it can’t be achieved (or won’t emerge) in AI.
Flint:
Their capabilities are diminished, but they’re still intelligent. If a skilled radiologist can lose the ability to learn but nevertheless remain intelligent, then I would argue that an AI can be intelligent even if learning ceases once its training is finished.
Yes, except that I would remove the qualifier “partially”. I think AI is intelligent, period, just like humans, octopuses, and border collies (Chaser the Dog Shows Off Her Smarts to Neil deGrasse Tyson) are intelligent.
Agreed. And we’re closer to that point than I ever thought we’d be within my lifetime.
He has yet to identify any such difference that doesn’t do violence to our notion of what intelligence is. The self-driving issue alone illustrates the corner he’s backed himself into. He agrees that driving requires intelligence. He knows that driving is defined as “operating and controlling the direction and speed of a motor vehicle”, as I pointed out above. He acknowledges that AIs can do precisely that. So now he’s trying unsuccessfully to redefine the word ‘driving’ in order to exclude AI.
It’s goofy and pointless. Operating a motor vehicle is an intelligent activity regardless of whether you call it ‘driving’ or not. And of course most people will reject Erik’s redefinition and continue using the word the way they always have.
Which would be as silly as coining a new word for what forklifts do, reserving the word “lifting” for living creatures.
Well, at one point he was arguing that the ability to take a dump was a prerequisite for intelligence, so I have faith in his ability to define machine intelligence out of existence no matter the circumstances. It’s just that it won’t be convincing to anyone who doesn’t share his idiosyncratic definition (and his determination to deny machine intelligence and maintain human exceptionalism).
Ganesh is my favorite Hindu deity, so one of my colleagues brought a Ganesh murti back from India for me. He’s been sitting on my computer ever since. (Ganesh, not the colleague). I decided to animate him yesterday. First I asked him to juggle the dumpling, but he just spazzed out:
Ganesh spazzes out
Then I asked him to move it with his trunk. Interesting that something was dripping from it — did the model decide that those things are served with syrup?
Ganesh’s trunk dexterity
At one point he spontaneously started shooting lightning bolts out of his hand, unprompted, which is how I know he’s real. I am now a Hindu:
Ganesh does his Zeus impression
If we can posit that Erik is intelligent himself, simply dismissing his viewpoint as wrong seems too black and white, especially when we’re more or less in agreement that the notion of “intelligence” is hazy around the edges. I personally put “intelligent” in the same category as terms like “beautiful” or “obscene”: things we know when we see them, while recognizing that others can disagree. Hell, reading SCOTUS decisions and dissents leads one to conclude that “constitutional” is one of those undefinable terms.
But don’t get me wrong, I can see that Erik is at a loss for any definition of intelligence that both describes people and excludes AI. He’s admitted that he himself can’t tell the difference unless he knows the author. I wonder how he would react to a “computer assisted” car, where the human does most of the controlling but the AI does things like parallel parking or accident avoidance. If a human chess player consults a chess program for advice, who is really playing chess? Would it matter how many moves were selected by the human and how many were decided by the computer? If he didn’t peek, he admits he could not know who was playing.
If I had a Ganesh, I’d name him Baba. Or her.
I’d argue that the brain wiring done by evolution and the wiring done by learning are most nearly analogous to firmware vs. software, with important differences.
A hundred fifty years of research and debate have not really untangled nature vs nurture. So it may take a while with AI.
Asimov’s Bicentennial Man worked for 200 years to earn status as a human citizen.
That’s fiction, but the real world arguments are in the air.
Here’s the conundrum: an AI that lacks smell and taste, along with the preferences and other sensual drives selected by evolution, will not share a common basis for decision making, and will be forever distrusted for certain kinds of political roles.
A possibly controversial analogy: we tend to be suspicious of anyone who values the afterlife more than this life.
I just got Grok to calculate something (I’m lazy!), but noticed an error in the middle of its workings-out. I simply said “39.6÷28 isn’t 14” and it understood immediately what I was referring to, said it had made an arithmetic error and corrected the whole thing. Not intelligence, but uncanny.
Almost human. First the error, then the recovery. Both are humanlike.
petrushka:
And if you had a Rama murti, you could name him Pano.
And leave out hardware? Evolution hasn’t just rewired the existing hardware — it’s added new modules. The neocortex, for one, as you mentioned.
The sensory stuff I don’t see as a problem. AI will ultimately be equipped with all of the senses that humans have and then some. The range will be superhuman: subsonic to ultrasonic, infrared to ultraviolet.
The tricky part is motivation: how to keep AI’s priorities aligned with ours. And even if an AI faithfully serves one group of people, it might be inimical to others. Think of the scary goals that a North Korean or jihadi AI might have.
Motivation is a strange thing. When we try to think about free will, we can assert we choose among options, but what we can’t do is choose our motives. We can’t choose to prefer bitter over sweet. Even if we attempt to overcome nature by eating bitter, we are responding to some other motive.
And, I have read, the old brain decides before we become aware of the decision.
Perhaps attempting to emulate these structures will illuminate some ancient philosophical problems. Or maybe drive them deeper.
Flint:
Erik is intelligent, but intelligent people can be wrong and I think this is a clear-cut case.
Categories can be fuzzy around the edges, but that doesn’t mean that there aren’t things that are definitely inside or outside. Crimson definitely falls in the category of ‘red’, but navy doesn’t. Debbie Does Dallas is definitely obscene, but the third Brandenburg Concerto isn’t. I’ve been intending to do an OP on this because discussions here and elsewhere sometimes get derailed by people insisting on precise definitions in cases where fuzzy ones suffice.
‘Constitutional’ is another of those terms that while fuzzy around the edges definitely includes some things and excludes others. Freedom of the press is definitely constitutional, but Trump running for a third term is not.
He wavers on that a bit. At one point he was arguing that there truly was a detectable difference between real stories and “simulated stories”:
I replied:
I don’t recall him saying anything about stories after that. I wish he’d address it, because it gets at the heart of the issue. Why attribute intelligence to a person who writes a story when you’d deny intelligence to an AI which had written the very same story? It makes no sense.
There’s a school of thought that Hollywood doesn’t create anything new, but just Mad-libs stories from bits of old movies.
petrushka:
If it were that easy, everyone would be a successful screenwriter. It takes intelligence to write a good film script.
Creativity isn’t often about creating new ideas de novo, but rather about combining existing ones in novel ways. Charlie Kaufman’s movie Adaptation, built around Susan Orlean’s book The Orchid Thief, springs to mind.
Ran across this interesting video:
The Real Reason Bees Refuse to Fly After Dark
keiths:
Claude:
keiths:
Claude:
I think he was unsure whether I was being serious and decided to err on the side of caution, giving me a mostly serious response but also throwing in a joking reference to “The Great Bee-Dropping Conspiracy of 2025”. And sucking up to me by complimenting my critical thinking skills, lol.
keiths:
Claude:
ETA: Speaking of which, I was so happy when USB-C came out and I could stop worrying about orientation. If you add up the time that people worldwide spend each year misplugging their USB-A connectors, I’ll bet it’s a staggering number.
Can confirm the USB conspiracy.
I’ve had it take more than two tries. Can the percentage be less than zero?
What does Claude say about three prong AC plugs when you can’t see the socket?
On a more serious note:
It appears that implementing Asimov’s laws of robotics is difficult.
Has there been a popular mystery novel or movie involving murder by prompt?
petrushka:
I asked him, and he replied:
What do you say, Erik? Fake humor?
petrushka:
True, which is why I think “free will” should really be christened “free choice”. Too late for that, though. The former term is too entrenched.
Even if motives actually were freely chosen, there would be a regress problem. We might be free to choose our motives, but would we be free to choose how we choose our motives?
Ran across this today:
Play the Turing Test Live
Players are randomly assigned to be either the “interrogator” or the “witness”. The interrogator is the person asking questions in order to figure out which of the other two players is the human and which is the AI, and the witness is the person trying to convince the interrogator that they are the actual human, while the AI tries to do the same.
I haven’t played, but it’s a neat idea. Since they’re getting both their interrogators and their witnesses from the pool of visitors, you’ll always be able to play unless you’re the only visitor at any given time.
This project was developed by a postdoc in the UC San Diego Language and Cognition lab. They anonymize the dialogues and use them for research purposes.
My takeaway:
1. The imitation game has become entertaining, but largely irrelevant.
2. AI has not solved the problem of reliability in information and argument.
3. AI does not yet have the ability to learn and retain corrections in real time. It does not have a global and lifetime context window.
4. Parsing and producing natural language turned out to be amenable to probabilistic approaches. AI requires no theory of grammar and syntax. (A toy sketch of what that means follows this list.)
5. Despite some amusing failures, AI outperforms average humans in most reasoning tasks. And it overwhelms humans in breadth of knowledge.
6. The big problem with AI is power consumption and efficiency.
7. All the ridiculed failures of AI have analogs in human behavior: faulty reasoning, misinformation, fake references. In other words, AI amplifies both the strengths and weaknesses of human intelligence. Which is a bit scary.
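Regarding point 4, here’s a toy illustration of what “probabilistic, with no theory of grammar” means: a bigram model that just counts which word follows which and samples accordingly. Real LLMs are enormously more sophisticated, but the spirit is the same: probabilities learned from text, with no explicit syntax rules anywhere. The miniature corpus is invented.

```python
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word. No grammar rules anywhere.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed `prev`.
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]

word, sentence = "the", ["the"]
while word != "." and len(sentence) < 12:
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```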
Today I asked AI to identify some snow footprints from my patio.
This is much better than science fiction AI.
Star Trek’s Data would never claim 100 percent confidence.
petrushka:
That’s impressive, especially considering how difficult visual processing is for AIs.
Yeah, minus 5 points for the overconfidence. I ran across a paper about that the other day. From the abstract:
The confidence thing was sarcasm, just in case.
Testing image upload.
Edit: no luck.
I was thinking fox, because we recently imaged a fox on a security camera.
Testing
petrushka:
Yeah, it’s still broken. Here’s the workaround I’m using.
That’s weird, but if you click on the image, you can see it undistorted.
In a comment on the other thread, I invented the portmanteau ‘Sanopfizeneca’ to represent Big Pharma. It occurred to me that this would make for a good AI test, so I asked them:
keiths:
The question was out of the blue, with no context indicating that I was talking about pharmaceutical companies. Here’s how ChatGPT (aka ‘Chip’) and Claude responded:
Chip:
Chip immediately recognized it as a pharmaceutical portmanteau, understood that I intended it mockingly, threw in a Simpsons reference, and added some trenchant commentary on mergers and corporate-speak.
Claude:
keiths:
Claude:
ETA: I asked the other major AIs (Grok, Gemini, Perplexity, Copilot) and they all recognized it as a portmanteau. All but Gemini discerned my humorous intent.
Here’s Grok’s take:
petrushka:
Yeah, the blog software compresses it laterally depending on the pixel counts. I always resize to 300 x 300, which seems to work well on both computers and phones.
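For what it’s worth, the resizing itself is only a couple of lines with Pillow; the filenames here are just placeholders.

```python
from PIL import Image

img = Image.open("footprints.jpg")                  # placeholder filename
img.resize((300, 300)).save("footprints_300.jpg")   # exact 300 x 300 for the blog
```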
AI fails the Turing test because no human could compose those paragraphs so quickly.
My photo analysis took about five seconds, using an unpaid Grok.
petrushka:
And right now most LLMs (or at least the ones I’ve asked) have no access to a clock. If they did, they could throttle their responses appropriately so as to appear human. Without temporal awareness, they just blurt out a response as soon as they can, which is what you want most of the time. (Although it can be pretty disconcerting to type a long, involved prompt that required a lot of thought and watch the AI respond almost instantly.) As it stands, anyone running Turing tests with LLMs must be imposing delays externally to the LLM, or else the game would be over quickly for the reason you describe.
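Here’s a sketch of what that external throttling might look like: a wrapper that sits outside the model, times the exchange, and pads the reply out to a plausible human reading-and-typing speed. The ask_model function is just a stand-in for whatever chat API is actually being called.

```python
import time

def ask_model(prompt):
    # Stand-in for a real chat API call: pretend the model answers instantly.
    return "Hmm, let me think... I'd say the human is the one making typos."

def humanlike_reply(prompt, chars_per_second=6.0, thinking_time=3.0):
    start = time.time()
    reply = ask_model(prompt)
    # How long would a person plausibly take to read, think, and type this?
    human_time = thinking_time + len(reply) / chars_per_second
    elapsed = time.time() - start
    if elapsed < human_time:
        time.sleep(human_time - elapsed)   # the delay is imposed outside the model
    return reply

print(humanlike_reply("Which of us do you think is the AI?"))
```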
One of my favorite diffusion model experiments is to feed it a simple starting image, such as a closeup of an object or a face, and then instruct it to pull the camera back so we can see the surroundings. The surroundings aren’t fully visible in the starting image, so the model has to use its crazy-ass imagination to fill in the gaps. The results can be interesting, or amusing, or bizarre, or any combination of the three.
I fed the model an image of an old Hamm’s bottle opener. The prompt was simple:
That was the entire prompt. I didn’t say who “they” was, leaving that up to the model’s imagination. Here’s the resulting video:
They are all staring at the bottle opener.
What could be more natural than to have a bottle opener “stared at” by other bottle openers?
Has anyone connected the term matrix multiplication with the movie?