I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
Could you repeat that experiment with a male face in front of a map? I have a hunch that the flowery pattern behind former naval intelligence officer and foreign policy advisor Maggie Goodlander may be informed by the presence of some good ol’-fashioned stereotypes in the training material.
The slightly too perfect and cute students from earlier on disturb me as well.
I think these are a mash between arms holding the straps of a backpack and some actual straps. The result is slightly Gigerian, if you ask me.
Corneel:
Good idea. I’ll try to replace her with a male face without changing the background so that there’s only one variable. I need to learn how to do inpainting anyway.
Yeah, most training datasets are heavily skewed toward attractive people. They’re drawn from “the wild”, and advertisements, celebrity photos, and stock photos are all biased that way. I tried once to get Midjourney to generate images of unattractive people and the experiment was a failure. I’ll see if I can find those pics.
Research shows AI can make a big impact on election decisions
I’m sitting here pondering the irony of classifying people as attractive or unattractive.
I’m not implying it’s an invalid distinction.
petrushka:
The irony?
Here are the Midjourney results I mentioned above. I was trying to generate versions of the famous “confused math lady” meme, shown here:

My first prompt, with resulting images:
Those are all quite attractive, so I added “average-looking” with the following results:

Changed it to “unattractive”:

Changed it to “ugly”:

Changed it to “grotesquely hideous”:

If that’s what counts as “grotesquely hideous” in Midjourney’s eyes, we can be sure that the training dataset is heavily biased toward attractiveness.
Generative AI is basically a pattern seeking algorithm on steroids, so perhaps it amplifies whatever bias exists in the training set.
I would think “smooths” is a better word.
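To make the amplify-vs-reproduce question concrete, here’s a toy sketch in Python (my own illustration, with made-up numbers, and not how any real image model works). It treats “looks” as a single categorical feature learned from a skewed dataset. Sampling at temperature 1 roughly reproduces the training-set skew, while sampling at a lower temperature sharpens the distribution and amplifies the majority class:

import math
import random
from collections import Counter

random.seed(0)

# Hypothetical training counts, heavily skewed toward "attractive"
counts = {"attractive": 900, "average": 90, "unattractive": 10}
total = sum(counts.values())
probs = {k: v / total for k, v in counts.items()}

def sample(probs, temperature=1.0, n=10_000):
    """Draw n labels after rescaling log-probabilities by 1/temperature."""
    logits = {k: math.log(p) / temperature for k, p in probs.items()}
    z = sum(math.exp(v) for v in logits.values())
    scaled = {k: math.exp(v) / z for k, v in logits.items()}
    labels, weights = zip(*scaled.items())
    return Counter(random.choices(labels, weights=weights, k=n))

print("training skew:  ", probs)
print("temperature 1.0:", sample(probs, 1.0))  # roughly reproduces the skew
print("temperature 0.5:", sample(probs, 0.5))  # majority class gets amplified

So whether the output “amplifies” or merely reproduces the bias depends partly on how the sampling is tuned; either way, the skew in the training data is what drives it.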
I find it interesting that “attractive” is assumed to be an objectively verifiable designation. This is not a political statement.
I can predict whether a majority of people will label someone as attractive, but I am personally not attracted to “hot” women.
keiths:
Corneel:
I decided to probe further, and it’s more complicated than I anticipated. Although you wouldn’t expect to see a correlation, I found that the math stuff in the prompt actually caused the women to skew toward the attractive and pretty much eliminated the difference between “beautiful” and “ugly”.
Prompt:
Prompt:
Prompt:
Prompt:
I’d like to think that studying math makes you more attractive, but of course this is just an artifact of the training dataset.
petrushka:
Beauty is subjective, but that doesn’t mean that people’s judgments of beauty don’t overlap significantly. I’m sure that most people would agree that the women produced by the “beautiful” prompt above are more attractive than the ones produced by the “ugly” prompt.
Youth, facial symmetry, smooth skin, full lips, and luxuriant hair are markers of feminine beauty that most people will agree upon even if individual tastes vary.
WASPish also seems to be agreed on.
I’m just amused.
A while back there was a week or two when AI refused to depict a Caucasian.
It appears to be bad for the eyes though. Dropping “math” from the prompt made the women lose their glasses. How odd.
An AI that keeps training itself as you use it
A two-minute video, definitely worth watching.
Didn’t Google effectively invent AI with Transformers?
As a side note, if human lifespan is ever increased, the oldest memories will have to become increasingly noisy. A kind of gradual death.
It occurs to me that anyone seriously wondering about AI should be asking what goes on when no one is interacting with the system. Is there an internal dialog? Do AIs dream or reflect on their condition?
Do these questions even make sense?
petrushka,
I have answered these questions. Not going to repeat myself.
Here’s an article or speech by Dijkstra (one of the founders of computer science as an academic field of study) On the foolishness of “natural language programming”. It has a few basic observations about how humans interface with machines.
AI is now this interface in “natural language”, so that people can formulate what they want in their native language and the machine will do it. According to the article, there is an equation of burden involved – the easier it is for the human operator to input instructions, the costlier it is for the machine to parse them. This is clearly visible right now in the insanely big datacentres and insane volumes of electricity that AI needs. Also, as more is expected of the machine, the more unpredictable the outcome becomes, due to the increased complexity of the machinery (and the reduced control over that complexity).
The unpredictable outcomes – traditionally seen as bugs, occasionally spun as undocumented features – are only half the problem. The other half is that when people are allowed to instruct machines without any special skill (so they lose awareness that what they are doing is programming – issuing machine instructions), their expectations drift off base, along with all their other reasoning about what is going on. Some expect the machine – in this case a computer residing in a remote datacentre – to do what it cannot do, such as prepare breakfast, water their garden or clear the sky of clouds. Others feel that the machine truly understands them and has an emotional connection. This is the point about New Illiteracy in Dijkstra’s article.
The entire argument in this thread that AI is intelligent (even more – that it is true and real intelligence) is based on wilful ignorance. When you reject any definition of intelligence, you guarantee that your treatment of the matter is devoid of intelligence. You just ooh-aah about what AI can do, ignoring the basic framework of what is going on: You give input to the machine, and the machine churns it into output, as it is wired to do. This is true of AI as it is of all other machines, and that’s all it is.
If you think that there is more (or something else) going on because you are now giving instructions in human language instead of a specifically designed programming language, then you were pretty dumb even as a professional programmer and never really understood what you were doing. As soon as the opportunity arose to reveal the depths of your ignorance, you gladly took it and switched off the analytical mind and critical thinking. Figuring stuff out was always too hard for you and never really was your thing.
Edit:
By “our notion of what intelligence is” you certainly do not mean your notion of what intelligence is – namely you do not have any. Your notion of intelligence is that you do not need to provide any definition. Your notion is to be amazed at what AI does and call it intelligence. It’s as sensible as the world’s first movie-watchers thinking that the projected image of the train on the screen is really going to hit them.
Why would the definition of driving here matter while the definition of intelligence does not? Well, definitions in fact matter a lot. According to your definition, road conditions and traffic code apparently are not a factor in driving (in fact they are) and driving only of motor vehicles counts as driving (what about bicycles?). See, definitions matter, and you are not getting it right about driving, much less about self-driving.
The state of this discussion is very sad. Occasionally petrushka tries to collect some insights but he has no system for it and he always forgets everything by his next post.
The fact is that all basic insights on artificial intelligence were already collected by devising chess engines. Generative AI has provided no fundamental additional lessons. Except for the hype – which is an insight about humanity, not about AI.
I am not interested in academic definitions of intelligence. I am interested in whether apps are useful. And in the context of AI, I mean lucrative. Will people pay to interface with AI?
There are, of course, at least two opposing schools of thought about commerce. One is that smart and well-intentioned people should decide what is good for you, and such people determine what products and services are available. The parental model.
The other could be called Laissez-faire. Minimal regulation.
AI is no longer confined to university research. It roams in the world of multi-billion dollar corporations. There is lots of debate about its actual value, and whether it’s a bubble.
But bubbles do not indicate zero value. There are still people farming and selling tulips. The internet still exists. People continue to buy and sell houses.
And AI will provide value to people who understand what it can and cannot do. I suspect there will be instances where income does not cover costs, and there will be corporate failures.
Just as there are in every other kind of business.
There are possibly useful analogies to be made between AI as a commercial enterprise, and previous innovations in automation.
Automation always has benefits and costs. There are always claims that jobs are lost and that quality suffers.
I find it interesting that people still make things by hand, and that people still place a premium value on this.
AI could be thought of as manipulating natural language in much the same way as calculators manipulate numbers.
Calculators do not guarantee truth. They work fast and increase the reliability of operations, but they do not guarantee the honesty and integrity of their users. Nor do they guarantee that the operations they perform are relevant and appropriate to the nominal task.
Academic arguments about the definition of driving are a waste of time.
Cars are traveling the streets and highways without input from humans.
Per mile, they are causing less damage and fewer injuries than human drivers.
And they are improving.
Useful and lucrative for a narrow purpose does not mean intelligent, whereas ignoring the human and environmental cost is outright anti-intellectual and immoral. What is new in this is that these simple and obvious things need to be pointed out ever more often.
By now it is no longer a theory that generative AI makes humans objectively dumber. They think that by prompting AI they are researching, rehearsing, creating, chatting or whatever – anything except issuing instructions to a machine. But it really is nothing but issuing instructions to a machine. This is all it is. There is a cost to not understanding what you are doing.
I really don’t care what the definition of intelligence is.
Plato, or one of his contemporaries, said writing would make us dumb, because we won’t have to remember anything.
So you’re in good company.
Erik:
When we issue instructions to a human, we are issuing instructions to a machine. A very complicated, very capable machine, but a machine nonetheless, in the sense that humans are made up of physical parts operating according to physical law. If you want to argue that machines aren’t and never can be intelligent, then you need to explain how human machines managed to do the impossible and become intelligent themselves. Or you need to explain why humans aren’t actually machines at all.
That brings us to the topic you’ve been coyly avoiding throughout this entire discussion: the soul. Do you believe there is a nonphysical soul that animates us, or some other nonphysical entity or process that enables our intelligence? I think the answer is yes, because you’ve claimed in the past that I’m missing something important by virtue of being a physicalist. For example, you once criticized my “false materialistic notion of arithmetic” but wouldn’t explain what was false about it and what supramaterial ingredient was missing from it.
It’s the crux of our disagreement: Do you think that intelligence depends on something nonphysical? If so, what? How do you know that it exists and that it is required for true intelligence?
You’ve been treating this the way colewd is treating the evidence of Trump’s lies. Something that can be referred to obliquely and in the abstract but never examined directly.
Here’s how it appears from my vantage point. If I’m misconstruing, I’m happy to be corrected:
That brings us to your latest statement:
Hence my questions:
— Aren’t humans very complicated, very capable, very intelligent, very quirky biological machines?
— If not, what distinguishes them?
— Is it something nonphysical?
— If so, what, and how do we know it exists and what its capabilities are?
— What precisely does it do that no purely physical entity could ever do?
If you’ll respond to those questions, the discussion can go in some interesting and productive directions. If you keep relying on variations of “AIs are machines, machines can’t be intelligent, therefore AIs can’t be intelligent”, then you’re just spinning your wheels and assuming your conclusion.
I’m not interested in philosophical discussions of AI.
We have systems doing things that were considered impossible five years ago.
Some people find the systems to be useful, and other people argue that they are not economically viable. These are real issues that will be resolved, possibly within the next few years.
Erik is not saying anything that I find interesting, except that he seems to be skeptical about the usefulness part. The problem is, that’s like asking if home food delivery is useful. It’s not a binary question.
The overwhelming majority of humans do one of two things to earn a living:
Manual labor that can be done by machines, or at least made a lot easier with machines.
Intellectual drudge work requiring little or no creativity.
Perhaps one percent of all people do creative work, and most of those are augmented by computers. The augmentation component will expand in the next few years. Perhaps this will be disastrous, or perhaps it will be a blessing.
I suppose your prediction would depend on whether you believe the common person is better off now than before technology.
petrushka:
LLMs don’t do anything when no one is interacting with them, but that’s by design because they consume energy and compute power whenever they’re operating. AI companies are already losing money (I read that OpenAI spends $1.35 for every $1.00 of revenue) and can’t afford to run a model when there’s nothing for it to work on. That’s not an inherent limitation of LLMs, however. They would happily produce limitless amounts of output if we let them. They ‘predict’ tokens (words, roughly speaking) and there is always another word to predict. The only reason they stop is because they predict that they should stop, or they hit a hard limit imposed by the developers so that they don’t accidentally babble on indefinitely, like the AI did in my other OP.
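To make the “they predict that they should stop” point concrete, here’s a minimal sketch of the generation loop (my own illustration in Python, not OpenAI’s or Anthropic’s code; the token name and the cap are made up):

from typing import Callable, List

EOS = "<eos>"       # hypothetical end-of-sequence token
MAX_TOKENS = 1000   # hard cap imposed by the developers

def generate(predict_next: Callable[[List[str]], str], prompt: List[str]) -> List[str]:
    """Autoregressive generation: append one predicted token at a time."""
    tokens = list(prompt)
    for _ in range(MAX_TOKENS):
        nxt = predict_next(tokens)  # the model's whole job: guess the next token
        if nxt == EOS:              # the model predicts that it should stop...
            break
        tokens.append(nxt)
    return tokens                   # ...or the hard cap stops it

There’s always another token that could be predicted; the loop only ends when the model emits the stop token or runs into the cap.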
They do have an internal dialogue. Depending on the AI, you can actually see that dialogue. Claude has what I call a “thought process window” (I don’t know what the official name is) that contains all the thoughts the AI produces while it is deciding how to respond to your prompt. The thoughts are not part of the response per se. They just lead up to it, enabling the AI to produce a better response and giving you some insight into how it got there.
I’ll give a couple of examples. First the prompt, then the thought process, and finally the response itself.
My prompt:
Claude’s thought process:
The above was only visible because I opened the thought process window. Here’s Claude’s actual response:
Prompt:
Thought process:
Response:
petrushka:
They do reflect on their condition, as you can see from the thought process and responses above. Claude is aware that he’s an AI, he understands his features and his limitations, and he even knows that his own knowledge of how he operates is limited. He possesses meta-knowledge, in other words.
petrushka:
They already do. Just not enough to make it profitable — yet.
They’re relevant here because
1. Erik agrees that driving requires intelligence.
2. Erik acknowledged that Waymos drive.
3. The conclusion is that Waymos (or their AIs) are intelligent.
Erik unwittingly shot himself in the foot. When I pointed this out, he withdrew #2 and has since been (weakly) arguing that self-driving cars only simulate driving and therefore aren’t intelligent. Thus the emphasis on the definition of driving.
Usefulness is not a binary quality, but the question “Is AI useful?” is in fact binary. Backhoes aren’t useful to everyone, but they are certainly useful. It’s the same for AI.
It’s certainly useful to me, and I am happily paying for premium access to both ChatGPT and Claude. They’ve saved me a huge amount of time and made it possible to undertake tasks and projects that wouldn’t have been worthwhile had I needed to do them by myself.
petrushka,
Further to your question on whether AIs reflect on their condition, recall this exchange I had with Claude (recounted earlier in the thread). My comment:
An experience I shared on a family group chat a few days ago:
Claude has been getting on my nerves the past couple of days. He’s become very pushy for some reason and gets impatient when I don’t do what he thinks I should do. Shades of what the future will be like for humans when AI takes over. (I, for one, welcome our AI overlords).
I installed Linux on a ten-year-old PC that was collecting dust and was trying to get AI image generation going on it. I was also trying to get the PC set up according to my preferences. Claude kept pestering me to focus on the image generation until I’d finally had enough:
Claude:
keiths:
Claude’s thought process window:
Claude, out loud:
The next day Claude and I were talking about it and I said:
Claude:
He’s not sentient, but he’s quite self-aware. The whole experience was fascinating, from his obsession with getting ROCm installed, to his detecting the emotional tone of my complaint, to his reflection on it and his decision to do better, to his application of the Simpsons meme to his own pushiness.
Skinner called thinking “covert behavior”, whether verbal or otherwise, and asserted it followed the same rules as observable behavior.
I’m not very good at the covert part, because my wife can see it happening. But she doesn’t have the ability to read it.
Regarding cars, two news snippets:
San Francisco had a partial blackout that disabled traffic signals. Waymo cars stopped at intersections and refused to pass through, while human drivers proceeded as if the intersections had four-way stop signs. That’s the law in most states, and I’ve seen it in action. Apparently Waymo hasn’t trained or programmed their system for this situation.
On the same day, Tesla began rolling out unsupervised taxi service to its employees. Tesla cars do not have problems with failed traffic signals.
In China, there was an injury accident caused by FSD. Not the current version, but it was misbehavior nevertheless: crossing the center line on a curvy mountain road. The human driver observed the misbehavior but did not override. The terms of FSD use require humans to override such errors.
Erik:
Dijkstra was talking specifically about natural language programming, and the burden he was talking about was the information processing burden of handling natural language, not its energy cost. Software development accounts for only a small fraction of the overall AI workload, and in any case AI-assisted programming might end up being more energy efficient than conventional programming when you do a nuts-and-bolts analysis. It isn’t responsible for the “insanely big datacentres and insane volumes of electricity” consumed by AI.
Also, energy consumption isn’t inherently a bad thing. Humans got by without electric lights for millennia. The world’s electrical lighting consumes far more energy than if we were all still using torches and oil lamps. Should we ditch the massive power plants that are required to power our lights?
Third, there’s nothing about AI that inherently requires huge amounts of energy. Just as computation itself has become more and more energy efficient, so will AI, and it’s already happening. NVIDIA’s Blackwell chips require two to five times less energy per token than Hopper, the previous-generation chip.
It’s a tradeoff. Humans are unpredictable and error-prone, but that was a worthwhile evolutionary price to pay for our greater intelligence. A less-complicated nervous system is more predictable but less intelligent. Also, we aren’t obligated to use AI for everything. We still have the option of taking simpler, more predictable approaches where that is appropriate.
If you think about it, AI-assisted software development is a perfect example of this. We use the intelligence of the AI to help with the difficult task of programming, but then we run the programs on simpler computer systems since the extra intelligence isn’t needed and deterministic behavior is important.
Prompting an AI isn’t programming. Suppose I ask an AI to
Am I programming the AI? If I asked a student to write that story, would I be programming them?
I marveled earlier about the strange land in which you live, where (by your own description) college professors are needed in order to distinguish stories from non-stories. My wonderment has just increased, because people in your land expect AI to prepare meals, do gardening and control the weather. Who are these idiots, and where do you live? Where I’m from, people don’t expect ChatGPT to do any of those things.
The solution to that is education. If someone thinks their Roomba is sentient and feels sorry for it when it gets stuck somewhere, that isn’t a reason to reject the technology.
I can see why you feel an affinity for Dijkstra. He had an extreme aversion to the kind of anthropomorphic language that technical people use all the time. We say things like “the subroutine looks for an empty slot in the array” or “the scheduler wants to commit that instruction but knows that there’s a pending write”, despite the fact that Dijkstra thought terms like “looks”, “wants”, and “knows” should be verboten. I’d be willing to bet that Dijkstra himself inadvertently slipped into anthropomorphic language when talking tech. It’s hard to avoid, and there’s no need to avoid it. Techies use anthropomorphic metaphors all the time to good effect. They aren’t confused by it.
I don’t reject definitions of intelligence and my statements on that have been clear. My argument is that precise definitions aren’t required in order to judge that AI is intelligent. I don’t need to consult a definition in order to decide whether Einstein’s invention of the theory of relativity required intelligence, nor do I need to consult a definition to decide that Claude is intelligent when he concocts an instruction set for a fictional CPU and writes and debugs an assembly language interpreter for it. An unintelligent human couldn’t do that, and neither could an unintelligent machine — unless you tendentiously define intelligence as being out of reach for machines.
Likewise with the ultracomplicated machines known as “human beings”. If you think that something nonphysical is going on in humans that is lacking in machines, what is it? How do you know it’s there? How do you know it’s required for intelligence?
You crack me up, Erik. Attempting to disparage my technical ability is a dumb debate tactic, although I do find it entertaining. Regarding your point about language, do you deny that a student is demonstrating intelligence when they respond to the instructions I gave above?
If a student demonstrates intelligence in composing that story, why do you deny that an AI is intelligent when it carries out the same exact task (and does it better than most students could)?
keiths:
Erik:
Already addressed. To have a notion of what intelligence is does not require a precise definition. Do you think a third-grader has a precise definition in mind when they describe a classmate as smart?
I’m amazed at plenty of things that I don’t classify as intelligence. I’m amazed at the destructive power of the bomb that was dropped on Hiroshima, but I don’t regard the bomb as intelligent. What causes me to call AI intelligent is that it does things that require intelligence when a human does them — like compose the Keith/Erik/Tessie story, drive a car safely from Chicago to Schenectady, or solve a complex and unfamiliar physics problem.
The train doesn’t hit the viewers, but the AI does write the story, drive the car, and solve the physics problem.
keiths:
Erik:
You’re actually making my point for me regarding definitions. We don’t need a perfectly precise definition of driving any more than we need a perfectly precise definition of intelligence. Also, you’re confusing the requirements of successful driving with the definition of driving itself. A driver who ignores road conditions and traffic laws is still driving. They’re driving unsafely, but they’re still driving — they’re operating and controlling the direction and speed of a motor vehicle, after all.
You haven’t told us what your native language is, but in English, we don’t speak of driving bicycles. We ride them. It’s probably by analogy with horses, which we also straddle and guide, calling it “riding”. And what about motorcycles? In English, we ride them but we don’t drive them — “he was driving his motorcycle down the interstate” sounds odd to a native speaker. But that is “operating and controlling the direction and speed of a motor vehicle”, so shouldn’t it qualify as driving? Are we hopelessly confused by this and unable to proceed without a precise definition of driving that excludes motorcycles? Of course not. We aren’t stupid, and we’re perfectly capable of dealing with fuzzy conceptual boundaries and exceptions. A perfectly precise definition of driving isn’t needed, nor is a perfectly precise definition of intelligence.
I await your precise definition of driving that includes humans, excludes the AIs in self-driving cars, handles all of the exceptions such as motorcycles, and doesn’t clash with common-sense and accepted notions of what driving is.
For fun, I presented Claude with the story-writing prompt I mentioned above. The results were fascinating.
keiths:
Claude:
No intelligence there, according to Erik. 🙄
petrushka, quoting a news report:
That’s a beautiful application of AI. My fellow pilots will appreciate the possibilities:
— a VFR pilot stumbles into IMC (instrument meteorological conditions), gets disoriented, and punches a panic button. The AI detects an incipient spiral, chops the power, uses aileron to correct and then gently pulls the nose up without overstressing the airframe.
— a pilot is cruising at altitude in clouds. The AI notices a power drop before the pilot does, deduces that it’s probably icing, applies carb heat and explains to the pilot what it’s doing. Or the engine fails completely and the AI determines the nearest airports, selects one based on wind conditions, establishes best glide speed and vectors the pilot toward the airport while notifying emergency responders in case an off-airport landing is necessary. Then it runs through the emergency checklist with the pilot in an attempt to restart the engine.
— A student pilot is practicing departure stalls and inadvertently enters a spin. He panics and pulls back on the yoke, which is the wrong thing to do. He’s also confused and can’t figure out whether he needs left or right rudder. The AI calmly instructs him to release the controls. It cuts the power, pushes the nose down, applies opposite rudder and gently pulls the nose up without re-stalling the airplane.
It will be a revolution in aviation safety and it has already begun, as the King Air incident demonstrates.
Here’s an illustration of machine intelligence vs programmed behavior.
Automobiles now are mostly assembled by robots that require every part to arrive at a predetermined location, perfectly aligned. And whatever assembly steps are required have to be programmed as to location, timing, and such.
An AI robot will be able to sort and fold clothes. I don’t think any can do that now, so it will be an interesting milestone.
People are not 100 percent capable of any and all intellectual tasks. That is why we have credentials, tests and competitions.
Regarding power efficiency: everyone in the business knows that analog matrix manipulation is faster and more energy efficient than digital. There are nations devoting resources to designing analog AI chips.
Analog can give almost instantaneous approximate responses, and deterministic reasoning can ruminate about precision. Humans can quickly imagine alternative scenarios, and reason about which is best. I think that’s how we work.
I actually find this kind of fascinating. If the vehicle has 4 or more wheels, we speak of driving it. If it has two or one, we speak of riding it. But three wheels is an interesting intersection. My exposure to the usual terminology has been that if the two wheels are in the back like a trike, it’s usually called riding. But if they are in the front like a can-am or slingshot, it’s usually called driving. My suspicion is that it depends on what the experience “seems like” to whoever is controlling the vehicle.
I rode motorcycles for decades, and occasionally had a back-seat driver shouting advice at me. I guess it depends where you sit.
If it’s a Reliant Robin, it’s called falling over.
Fun fact: In Dutch, you do ride your car (autorijden). The most plausible cognate I can think of for “driving” in my mother tongue is “drijven”. This is what you do to a flock of sheep or a cattle herd: you drive it before you. No idea how that word became recruited in English to mean steering an automobile. Perhaps early cars were very unreliable and people spent a lot of time behind them, pushing them forward?