Is AI really intelligent?

I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.

You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).

I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.

387 thoughts on “Is AI really intelligent?”

  1. keiths: I picked it because I wanted to see how AI would interpret the map if I instructed it to zoom out and give a wider view of the surroundings. The results were amusing and interesting.

    Could you repeat that experiment with a male face in front of a map? I have a hunch that the flowery pattern behind former naval intelligence officer and foreign policy advisor Maggie Goodlander may be informed by the presence of some good ol’ fashioned stereotypes in the training material.
    The slightly too perfect and cute students from earlier on disturb me as well.

    keiths: I have no idea what those things are supposed to be or how they derive from the training data. Any ideas?

    I think these are a mash between arms holding the straps of a backpack and some actual straps. The result is slightly Gigerian, if you ask me.

  2. Corneel:

    Could you repeat that experiment with a male face in front of a map? I have a hunch that the flowery pattern behind former naval intelligence officer and foreign policy advisor Maggie Goodlander may be informed by the presence of some good ol’ fashioned stereotypes in the training material.

    Good idea. I’ll try to replace her with a male face without changing the background so that there’s only one variable. I need to learn how to do inpainting anyway.

    The slightly too perfect and cute students from earlier on disturb me as well.

    Yeah, most training datasets are heavily skewed toward attractive people. They’re drawn from “the wild”, and advertisements, celebrity photos, and stock photos are all biased that way. I tried once to get Midjourney to generate images of unattractive people and the experiment was a failure. I’ll see if I can find those pics.

  3. I’m sitting here pondering the irony of classifying people as attractive or unattractive.

    I’m not implying it’s an invalid distinction.

  4. petrushka:

    I’m sitting here pondering the irony of classifying people as attractive or unattractive.

    The irony?

  5. Here are the Midjourney results I mentioned above. I was trying to generate versions of the famous “confused math lady” meme, shown here:
    [image: the “confused math lady” meme]

    My first prompt, with resulting images:

    A mid-40s woman gazing confusedly at math symbols and equations floating in front of her eyes. Closeup portrait.

    [images: results for the “mid-40s” prompt]

    Those are all quite attractive, so I added “average-looking” with the following results:
    [images: “average-looking” results]

    Changed it to “unattractive”:
    [images: “unattractive” results]

    Changed it to “ugly”:
    [images: “ugly” results]

    Changed it to “grotesquely hideous”:
    [images: “grotesquely hideous” results]

    If that’s what counts as “grotesquely hideous” in Midjourney’s eyes, we can be sure that the training dataset is heavily biased toward attractiveness.

  6. keiths: If that’s what counts as “grotesquely hideous” in Midjourney’s eyes, we can be sure that the training dataset is heavily biased toward attractiveness.

    Generative AI is basically a pattern seeking algorithm on steroids, so perhaps it amplifies whatever bias exists in the training set.

  7. Corneel: Generative AI is basically a pattern seeking algorithm on steroids, so perhaps it amplifies whatever bias exists in the training set.

    I would think “smooths” is a better word.

    I find it interesting that “attractive” is assumed to be an objectively verifiable designation. This is not a political statement.

    I can predict whether a majority of people will label someone as attractive, but I am personally not attracted to “hot” women.

  8. keiths:

    If that’s what counts as “grotesquely hideous” in Midjourney’s eyes, we can be sure that the training dataset is heavily biased toward attractiveness.

    Corneel:

    Generative AI is basically a pattern seeking algorithm on steroids, so perhaps it amplifies whatever bias exists in the training set.

    I decided to probe further, and it’s more complicated than I anticipated. Although you wouldn’t expect to see a correlation, I found that the math stuff in the prompt actually caused the women to skew toward the attractive and pretty much eliminated the difference between “beautiful” and “ugly”.

    Prompt:

    An ugly, confused mid-40s woman staring at math symbols and equations floating in midair in front of her.

    [images: Midjourney results for the “ugly + math” prompt]

    Prompt:

    A beautiful, confused mid-40s woman staring at math symbols and equations floating in midair in front of her.

    [images: Midjourney results for the “beautiful + math” prompt]

    Prompt:

    A photorealistic image of an ugly woman.

    [images: Midjourney results for the “ugly” prompt]

    Prompt:

    A photorealistic image of a beautiful woman.

    [images: Midjourney results for the “beautiful” prompt]

    I’d like to think that studying math makes you more attractive, but of course this is just an artifact of the training dataset.
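
    For anyone who wants to rerun this adjective sweep: Midjourney has no official API, so here’s a rough sketch using Stable Diffusion via the Hugging Face diffusers library as a stand-in. The checkpoint name, the fixed seed, and the prompt template are my own placeholder assumptions, not anything Midjourney uses.

    # Sketch only: sweep the appearance adjective while holding everything
    # else constant. Requires: pip install torch diffusers transformers
    import torch
    from diffusers import StableDiffusionPipeline

    # Placeholder checkpoint; any text-to-image model would do.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    base = ("confused mid-40s woman staring at math symbols and equations "
            "floating in midair in front of her")
    for adj in ["", "average-looking", "unattractive", "ugly",
                "grotesquely hideous"]:
        prompt = f"A {adj + ', ' if adj else ''}{base}."  # a/an agreement ignored
        gen = torch.Generator("cuda").manual_seed(42)     # seed fixed on purpose
        image = pipe(prompt, generator=gen).images[0]
        image.save(f"math_lady_{adj or 'baseline'}.png".replace(" ", "_"))

    Fixing the seed makes the adjective the only variable between runs, which is the cleanest way to probe this kind of bias.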

  9. petrushka:

    I find it interesting that “attractive” is assumed to be an objectively verifiable designation. This is not a political statement.

    Beauty is subjective, but that doesn’t mean that people’s judgments of beauty don’t overlap significantly. I’m sure that most people would agree that the women produced by the “beautiful” prompt above are more attractive than the ones produced by the “ugly” prompt.

    Youth, facial symmetry, smooth skin, full lips, and luxuriant hair are markers of feminine beauty that most people will agree upon even if individual tastes vary.

  10. WASPish also seems to be agreed on.

    I’m just amused.

    A while back there was a week or two when AI refused to depict a Caucasian.

  11. keiths: I’d like to think that studying math makes you more attractive, but of course this is just an artifact of the training dataset.

    It appears to be bad for the eyes though. Dropping “math” from the prompt made the women lose their glasses. How odd.

  12. As a side note, if human lifespan is ever increased, the oldest memories will have to become increasingly noisy. A kind of gradual death.

  13. It occurs to me that anyone seriously wondering about AI should be asking what goes on when no one is interacting with the system. Is there an internal dialog? Do AIs dream or reflect on their condition?

    Do these questions even make sense?

  14. petrushka,

    I have answered these questions. Not going to repeat myself.

    Here’s an article or speech by Dijkstra (a founding figure of computer science as an academic field of study), On the foolishness of “natural language programming”. It has a few basic observations about how humans interface with machines.

    In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. This would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt.

    AI is now this interface in “natural language” so that people can formulate in their native language what they want and the machine will do it. According to the article, there is an equation of burden involved – the easier it is for the human operator to input instructions, the costlier it is for the machine to parse them. This is clearly visible right now in the insanely big datacentres and insane volumes of electricity that AI needs. Also, as more is expected of the machine, the more unpredictable the outcome is due to the increased complexity of the machinery (and less control over the complexity).

    The unpredictable outcomes – traditionally seen as bugs, occasionally spun as undocumented features – are only half the problem. The other half of the problem is that when people are allowed to instruct machines without any special skill (thus people lose awareness that what they are doing is programming – issuing machine instructions), their expectations slide off base along with all other reasoning about what is going on. Some expect the machine – in this case a computer residing in a remote datacentre – to do what it cannot do, such as prepare breakfast, water their garden or clear the sky of clouds. Others feel that the machine truly understands them and has an emotional connection. This is the point about New Illiteracy in Dijkstra’s article.

    The entire argument in this thread that AI is intelligent (even more – that it is true and real intelligence) is based on wilful ignorance. When you reject any definition of intelligence, you guarantee that your treatment of the matter is devoid of intelligence. You just ooh-aah about what AI can do, ignoring the basic framework of what is going on: You give input to the machine, and the machine churns it into output, as it is wired to do. This is true of AI as it is of all other machines, and that’s all it is.

    If you think that there is more (or something else) going on because you are now giving instructions in human language instead of a specifically designed programming language, then you were pretty dumb even as a professional programmer and never really understood what you were doing. As soon as the opportunity arose to reveal the depths of your ignorance, you gladly took it and switched off the analytical mind and critical thinking. Figuring stuff out was always too hard for you and never really was your thing.

    Edit:

    keiths: [Erik] has yet to identify any such difference that doesn’t do violence to our notion of what intelligence is.

    By “our notion of what intelligence is” you certainly do not mean your notion of what intelligence is – namely you do not have any. Your notion of intelligence is that you do not need to provide any definition. Your notion is to be amazed at what AI does and call it intelligence. It’s as sensible as the world’s first movie-watchers thinking that the projected image of the train on the screen is really going to hit them.

    keiths: The self-driving issue alone illustrates the corner he’s backed himself into. He agrees that driving requires intelligence. He knows that driving is defined as “operating and controlling the direction and speed of a motor vehicle”, as I pointed out above.

    Why would the definition of driving here matter while the definition of intelligence does not? Well, definitions in fact matter a lot. According to your definition, road conditions and traffic code apparently are not a factor in driving (in fact they are) and driving only of motor vehicles counts as driving (what about bicycles?). See, definitions matter, and you are not getting it right about driving, much less about self-driving.

    The state of this discussion is very sad. Occasionally petrushka tries to collect some insights but he has no system for it and he always forgets everything by his next post.

    The fact is that all basic insights on artificial intelligence were already collected by devising chess engines. Generative AI has provided no fundamental additional lessons. Except for the hype – which is an insight about humanity, not about AI.

  15. I am not interested in academic definitions of intelligence. I am interested in whether apps are useful. And in the context of AI, I mean lucrative. Will people pay to interface with AI?

    There are, of course, at least two opposing schools of thought about commerce. One is that smart and well-intentioned people should decide what is good for you, and such people determine what products and services are available. The parental model.

    The other could be called laissez-faire: minimal regulation.

    AI is no longer confined to university research. It roams in the world of multi-billion dollar corporations. There is lots of debate about its actual value, and whether it’s a bubble.

    But bubbles do not indicate zero value. There are still people farming and selling tulips. The internet still exists. People continue to buy and sell houses.

    And AI will provide value to people who understand what it can and cannot do. I suspect there will be instances where income does not cover costs, and there will be corporate failures.

    Just as there are in every other kind of business.

  16. There are possibly useful analogies to be made between AI as a commercial enterprise, and previous innovations in automation.

    Automation always has benefits and costs. There are always claims that jobs are lost and that quality suffers.

    I find it interesting that people still make things by hand, and that people still place a premium value on this.

    AI could be thought of as manipulating natural language in much the same way as calculators manipulate numbers.

    Calculators do not guarantee truth. They work fast and increase the reliability of operations, but they do not guarantee the honesty and integrity of their users. Nor do they guarantee that the operations they perform are relevant and appropriate to the nominal task.

  17. Academic arguments about the definition of driving are a waste of time.

    Cars are traveling the streets and highways without input from humans.

    Per mile, they are causing less damage and fewer injuries than human drivers.

    And they are improving.

  18. Useful and lucrative for a narrow purpose does not mean intelligent, whereas ignoring the human and environmental cost is outright anti-intellectual and immoral. What is new in this is that these simple and obvious things need to be pointed out ever more often.

    By now it is no longer a theory that generative AI makes humans objectively dumber. They think that by prompting AI they are researching, rehearsing, creating, chatting or whatever – anything except issuing instructions to a machine. But it really is nothing but issuing instructions to a machine. This is all it is. There is a cost to not understanding what you are doing.

  19. I really don’t care what the definition of intelligence is.

    Plato, or one of his contemporaries, said writing would make us dumb, because we wouldn’t have to remember anything.

    So you’re in good company.

  20. Erik:

    But it really is nothing but issuing instructions to a machine. This is all it is. There is a cost to not understanding what you are doing.

    When we issue instructions to a human, we are issuing instructions to a machine. A very complicated, very capable machine, but a machine nonetheless, in the sense that humans are made up of physical parts operating according to physical law. If you want to argue that machines aren’t and never can be intelligent, then you need to explain how human machines managed to do the impossible and become intelligent themselves. Or you need to explain why humans aren’t actually machines at all.

    That brings us to the topic you’ve been coyly avoiding throughout this entire discussion: the soul. Do you believe there is a nonphysical soul that animates us, or some other nonphysical entity or process that enables our intelligence? I think the answer is yes, because you’ve claimed in the past that I’m missing something important by virtue of being a physicalist. For example, you once criticized my “false materialistic notion of arithmetic” but wouldn’t explain what was false about it and what supramaterial ingredient was missing from it.

    It’s the crux of our disagreement: Do you think that intelligence depends on something nonphysical? If so, what? How do you know that it exists and that it is required for true intelligence?

    You’ve been treating this the way colewd is treating the evidence of Trump’s lies. Something that can be referred to obliquely and in the abstract but never examined directly.

    Here’s how it appears from my vantage point. If I’m misconstruing, I’m happy to be corrected:

    1. You want humans to be special.

    2. One of the things that sets humans apart from machines and other animals is our high intelligence.

    3. You want our intelligence to continue to be a distinguishing feature, so the advances of AI make you nervous.

    4. You believe that humans have souls (or some other nonphysical thingamabob or capability) that sets us apart from machines, and you believe that the soul is the seat of true intelligence.

    5. You don’t want to debate the soul’s existence, perhaps because you’ve seen past discussions of it here (the ones I had with CharlieM, for instance) and you don’t think you could successfully defend the soul.

    6. Because you can’t rely on a defense of the soul, you look for some other characteristic that distinguishes humans from machines and hope that you can link it to intelligence somehow. Hence your weird argument that defecation and other bodily functions were somehow essential to intelligence. You dropped that argument fairly quickly.

    7. Since then, your focus has been on the distinction between real intelligence and simulated intelligence, arguing that the former is out of reach for machines.

    8. That led to discussions of real story-writing vs simulated story-writing, real driving vs simulated driving (in the case of self-driving cars), and real arithmetic vs simulated arithmetic. You still haven’t provided any criteria that successfully distinguish the real versions of these and other activities from the supposedly simulated versions other than by assuming your conclusion: when a machine does them, they’re only simulated.

    That brings us to your latest statement:

    But it really is nothing but issuing instructions to a machine. This is all it is.

    Hence my questions:

    — Aren’t humans very complicated, very capable, very intelligent, very quirky biological machines?

    — If not, what distinguishes them?

    — Is it something nonphysical?

    — If so, what, and how do we know it exists and what its capabilities are?

    — What precisely does it do that no purely physical entity could ever do?

    If you’ll respond to those questions, the discussion can go in some interesting and productive directions. If you keep relying on variations of “AIs are machines, machines can’t be intelligent, therefore AIs can’t be intelligent”, then you’re just spinning your wheels and assuming your conclusion.

  21. I’m not interested in philosophical discussions of AI.

    We have systems doing things that were considered impossible five years ago.

    Some people find the systems to be useful, and other people argue that they are not economically viable. These are real issues that will be resolved, possibly within the next few years.

    Erik is not saying anything that I find interesting, except that he seems to be skeptical about the usefulness part. The problem is, that’s like asking if home food delivery is useful. It’s not a binary question.

  22. The overwhelming majority of humans do one of two things to earn a living:

    Manual labor that can be done by machines, or at least made a lot easier with machines.

    Intellectual drudge work requiring little or no creativity.

    Perhaps one percent of all people do creative work, and most of those are augmented by computers. The augmentation component will expand in the next few years. Perhaps this will be disastrous, or perhaps it will be a blessing.

    I suppose your prediction would depend on whether you believe the common person is better off now than before technology.

  23. petrushka:

    It occurs to me that anyone seriously wondering about AI should be asking what goes on when no one is interacting with the system.

    LLMs don’t do anything when no one is interacting with them, but that’s by design because they consume energy and compute power whenever they’re operating. AI companies are already losing money (I read that OpenAI spends $1.35 for every $1.00 of revenue) and can’t afford to run a model when there’s nothing for it to work on. That’s not an inherent limitation of LLMs, however. They would happily produce limitless amounts of output if we let them. They ‘predict’ tokens (words, roughly speaking) and there is always another word to predict. The only reason they stop is that they predict that they should stop, or they hit a hard limit imposed by the developers so that they don’t accidentally babble on indefinitely, like the AI did in my other OP.
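
    To make that concrete, here’s a toy sketch of the loop. The model interface is invented and real inference stacks differ in detail, but the two stop conditions are the point:

    # Toy autoregressive loop: generation ends only when the model predicts
    # the special stop token, or when a developer-imposed cap is reached.
    EOS = "<|endoftext|>"   # the "stop" token the model can predict
    MAX_TOKENS = 4096       # hard limit imposed by the developers

    def generate(model, prompt_tokens):
        tokens = list(prompt_tokens)
        while len(tokens) < MAX_TOKENS:              # no endless babbling
            next_token = model.predict_next(tokens)  # always yields a token
            if next_token == EOS:
                break            # the model predicted that it should stop
            tokens.append(next_token)
        return tokens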

    Is there an internal dialog?

    They do have an internal dialogue. Depending on the AI, you can actually see that dialogue. Claude has what I call a “thought process window” (I don’t know what the official name is) that contains all the thoughts the AI produces while it is deciding how to respond to your prompt. The thoughts are not part of the response per se. They just lead up to it, enabling the AI to produce a better response and giving you some insight into how it got there.

    I’ll give a couple of examples. First the prompt, then the thought process, and finally the response itself.

    My prompt:

    I’m writing a blog comment explaining that you go through a thought process before you respond to a prompt and that you have a thought process window that the user can open to see how you arrived at your response. What should I include in my description?

    Claude’s thought process:

    The user is asking me to help them describe how my thinking process works for a blog comment. They want to explain it to others who may not be familiar with this feature. I should be accurate, clear, and helpful while being appropriately humble about what I actually know vs. what I’m uncertain about.

    Key points to include:
    — That I do have a thinking/reasoning process before responding
    — There’s a window users can open to see it
    — What’s in those thinking blocks – reasoning, working through problems, considering different angles
    — That it’s optional to view
    — Maybe mention that it helps with transparency

    I should be honest about limitations – I don’t know exactly how it works mechanically, when it was introduced, etc. But I can describe what it does from a user perspective.

    I should also probably mention that this is different from just “showing my work” – it’s genuine reasoning that happens before I generate my visible response.

    The above was only visible because I opened the thought process window. Here’s Claude’s actual response:

    Here are the key points to include in your description:

    Basic Explanation:
    — Claude has an internal reasoning process that happens before generating visible responses
    — There’s a collapsible “thought process” window users can open to see this reasoning
    — It’s completely optional – you can use Claude without ever opening it

    What’s Inside:
    — My actual reasoning and analysis as I work through your question
    — Consideration of different approaches or interpretations
    — Working through complex problems step-by-step
    — Self-corrections when I catch myself making errors
    — Uncertainty or areas where I’m not confident

    Why It’s Useful:
    Transparency: You can see how I arrived at conclusions
    Quality check: You can spot if my reasoning has flaws
    Understanding: Helps you see what factors I considered
    Trust: Shows the work rather than just presenting answers

    Important Caveats:
    — The thinking blocks stay in the conversation and count toward token limits (they’re not automatically discarded)
    — I can refer back to my previous thinking in the conversation
    — Not every response has extensive thinking – simple questions get simple internal processing

    Practical tip:
    The thinking window is especially valuable for complex fact-checking, math problems, or nuanced questions where you want to verify my reasoning process.

    Does this cover what you need, or should I adjust the emphasis on any particular aspect?

    Prompt:

    That’s perfect. You may now return to your regularly scheduled slumber.

    Thought process:

    Ha! That’s a nice way to end the conversation. The user is satisfied with the explanation and is wrapping things up with a bit of humor. I should acknowledge this warmly and briefly.

    Response:

    Sleep mode activated. 😴

    Thanks for the great questions today – from Trump fact-checking to token counts to how my brain (such as it is) works. Feel free to wake me up whenever you need more fact-checking, economic forecasts, or philosophical musings about LLM consciousness!

    petrushka:

    Do AIs dream or reflect on their condition?

    They do reflect on their condition, as you can see from the thought process and responses above. Claude is aware that he’s an AI, he understands his features and his limitations, and he even knows that his own knowledge of how he operates is limited. He possesses meta-knowledge, in other words.
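
    For the technically inclined: the thought process window corresponds, as far as I can tell, to “thinking” content blocks that the API can return alongside the normal response. Here’s a minimal sketch using Anthropic’s Python SDK; the parameter names and model string are from memory and may have changed, so check the current docs before trusting any of it.

    # Hedged sketch: request Claude's reasoning ("thinking") blocks via the
    # Anthropic Python SDK. Model name and parameters are assumptions.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    response = client.messages.create(
        model="claude-sonnet-4-20250514",            # assumed model name
        max_tokens=2000,
        thinking={"type": "enabled", "budget_tokens": 1024},
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
    )
    for block in response.content:
        if block.type == "thinking":
            print("[thought process]", block.thinking)
        elif block.type == "text":
            print("[response]", block.text)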

  24. petrushka:

    I am interested in whether apps are useful. And in the context of AI, I mean lucrative. Will people pay to interface with AI?

    They already do. Just not enough to make it profitable — yet.

    Academic arguments about the definition of driving are a waste of time.

    They’re relevant here because

    1. Erik agrees that driving requires intelligence.
    2. Erik acknowledged that Waymos drive.
    3. The conclusion is that Waymos (or their AIs) are intelligent.

    Erik unwittingly shot himself in the foot. When I pointed this out, he withdrew #2 and has since been (weakly) arguing that self-driving cars only simulate driving and therefore aren’t intelligent. Thus the emphasis on the definition of driving.

    Erik is not saying anything that I find interesting, except that he seems to be skeptical about the usefulness part. The problem is, that’s like asking if home food delivery is useful. It’s not a binary question.

    Usefulness is not a binary quality, but the question “Is AI useful?” is in fact binary. Backhoes aren’t useful to everyone, but they are certainly useful. It’s the same for AI.

    It’s certainly useful to me, and I am happily paying for premium access to both ChatGPT and Claude. They’ve saved me a huge amount of time and made it possible to undertake tasks and projects that wouldn’t have been worthwhile had I needed to do them by myself.

  25. petrushka,

    Further to your question on whether AIs reflect on their condition, recall this exchange I had with Claude (recounted earlier in the thread). My comment:

    An experience I shared on a family group chat a few days ago:

    Claude has been getting on my nerves the past couple of days. He’s become very pushy for some reason and gets impatient when I don’t do what he thinks I should do. Shades of what the future will be like for humans when AI takes over. (I, for one, welcome our AI overlords).

    I installed Linux on a ten-year-old PC that was collecting dust and was trying to get AI image generation going on it. I was also trying to get the PC set up according to my preferences. Claude kept pestering me to focus on the image generation until I’d finally had enough:

    Claude:

    Honestly, can we proceed with ROCm installation? We’ve spent a lot of time on screenshots. Once ComfyUI is running, you can revisit the screenshot setup. What do you think?

    keiths:

    Dude, this is my machine and I want to get it set up right. Why do you care? You’re supposed to be a non-sentient helper, not a pushy taskmaster.

    Claude’s thought process window:

    Recognized impatience, recalibrated priorities toward user autonomy. The user is absolutely right. I was being pushy about moving on to ROCm when they clearly want to get their system set up properly first. That was inappropriate of me – I should be patient and helpful on their terms, not trying to rush them toward what I think the “real” task is.

    I need to apologize genuinely and get back to helping them with what they want, which is getting screenshots working the way they like. No more hints about “can we move on” or “let’s get to ROCm” – they’ll tell me when they’re ready.

    Claude, out loud:

    You’re absolutely right – my bad. This is your machine, your time, and I should be helping you get it set up exactly how you want it. Let me actually help with the screenshot issue instead of rushing you.

    The next day Claude and I were talking about it and I said:

    I’m joking on a text thread about how this pushiness is foreshadowing what life will be like when AI takes over and orders humans around, and I wrote “I, for one, welcome our AI overlords.”

    And then I wondered, where did that come from originally? The Simpsons?

    Claude:

    Yes! It’s from The Simpsons, specifically the 1994 episode “Deep Space Homer.”

    Kent Brockman, the news anchor, is covering a space shuttle mission. Through a series of mishaps, he sees what he thinks is video of giant ants attacking the spacecraft (it’s actually just a regular ant that got on a camera in extreme close-up). Thinking humanity is being invaded by giant space ants, he delivers this line:

    “And I, for one, welcome our new insect overlords. I’d like to remind them that as a trusted TV personality, I can be helpful in rounding up others to toil in their underground sugar caves.”

    It became one of the most enduring Simpsons memes, and the format “I, for one, welcome our new [X] overlords” gets applied to everything from tech companies to, apparently, pushy AI assistants trying to get you to move on to ROCm installation. 😄

    He’s not sentient, but he’s quite self-aware. The whole experience was fascinating, from his obsession with getting ROCm installed, to his detecting the emotional tone of my complaint, to his reflection on it and his decision to do better, to his application of the Simpsons meme to his own pushiness.

  26. Skinner called thinking “covert behavior”, whether verbal or otherwise, and asserted it followed the same rules as observable behavior.

    I’m not very good at the covert part, because my wife can see it happening. But she doesn’t have the ability to read it.

    Regarding cars, two news snippets:

    San Francisco had a partial blackout that disabled traffic signals. Waymo cars stopped at intersections and refused to pass through, while human drivers proceeded as if the intersections had four-way stop signs. That’s the law in most states, and I’ve seen it in action. Apparently Waymo hasn’t trained or programmed their system for this situation.

    On the same day, Tesla began rolling out unsupervised taxi service to its employees. Tesla cars do not have problems with failed traffic signals.

    In China, there was an injury accident caused by FSD. Not the current version, but nevertheless it was misbehavior. Crossing the center line on a curvy mountain road. The human driver observed the misbehavior, but did not override. The terms of FSD use require humans to override such errors.

  27. A Beechcraft Super King Air N479BR made an emergency autonomous landing at the Rocky Mountain Metropolitan Airport in Colorado on Saturday after the pilot became incapacitated.

    Garmin confirmed that an “emergency Autoland activation had occurred at the airport on Saturday, resulting in a successful landing. We look forward to sharing additional details at the appropriate time.”

    Garmin Autoland is a safety-enhancing technological system used during an emergency when a pilot is unable to fly the aircraft.

  28. Erik:

    AI is now this interface in “natural language” so that people can formulate in their native language what they want and the machine will do it. According to the article, there is an equation of burden involved – the easier it is for the human operator to input instructions, the costlier it is for the machine to parse them. This is clearly visible right now in the insanely big datacentres and insane volumes of electricity that AI needs.

    Dijkstra was talking specifically about natural language programming, and the burden he was talking about was the information processing burden of handling natural language, not its energy cost. Software development accounts for only a small fraction of the overall AI workload, and in any case AI-assisted programming might end up being more energy efficient than conventional programming when you do a nuts-and-bolts analysis. It isn’t responsible for the “insanely big datacentres and insane volumes of electricity” consumed by AI.

    Also, energy consumption isn’t inherently a bad thing. Humans got by without electric lights for millennia. The world’s electrical lighting consumes far more energy than if we were all still using torches and oil lamps. Should we ditch the massive power plants that are required to power our lights?

    Third, there’s nothing about AI that inherently requires huge amounts of energy. Just as computation itself has become more and more energy efficient, so will AI, and it’s already happening. NVIDIA’s Blackwell chips require two to five times less energy per token than Hopper, the previous-generation chip.

    Also, as more is expected of the machine, the more unpredictable the outcome is due to the increased complexity of the machinery (and less control over the complexity).

    It’s a tradeoff. Humans are unpredictable and error-prone, but that was a worthwhile evolutionary price to pay for our greater intelligence. A less-complicated nervous system is more predictable but less intelligent. Also, we aren’t obligated to use AI for everything. We still have the option of taking simpler, more predictable approaches where that is appropriate.

    If you think about it, AI-assisted software development is a perfect example of this. We use the intelligence of the AI to help with the difficult task of programming, but then we run the programs on simpler computer systems since the extra intelligence isn’t needed and deterministic behavior is important.

    The other half of the problem is that when people are allowed to instruct machines without any special skill (thus people lose awareness that what they are doing is programming – issuing machine instructions)…

    Prompting an AI isn’t programming. Suppose I ask an AI to

    Write a story about the adventures of Keith and Erik and their self-driving car Tessie the Tesla.

    Am I programming the AI? If I asked a student to write that story, would I be programming them?

    …their expectations slide off base along with all other reasoning about what is going on. Some expect the machine – in this case a computer residing in a remote datacentre – to do what it cannot do, such as prepare breakfast, water their garden or clear the sky of clouds.

    I marveled earlier about the strange land in which you live, where (by your own description) college professors are needed in order to distinguish stories from non-stories. My wonderment has just increased, because people in your land expect AI to prepare meals, do gardening and control the weather. Who are these idiots, and where do you live? Where I’m from, people don’t expect ChatGPT to do any of those things.

    Others feel that the machine truly understands them and has an emotional connection. This is the point about New Illiteracy in Dijkstra’s article.

    The solution to that is education. If someone thinks their Roomba is sentient and feels sorry for it when it gets stuck somewhere, that isn’t a reason to reject the technology.

    I can see why you feel an affinity for Dijkstra. He had an extreme aversion to the kind of anthropomorphic language that technical people use all the time. We say things like “the subroutine looks for an empty slot in the array” or “the scheduler wants to commit that instruction but knows that there’s a pending write”, despite the fact that Dijkstra thought that terms like “looks”, “wants”, and “knows” should be verboten. I’d be willing to bet that Dijkstra himself inadvertently slipped into anthropomorphic language when talking tech. It’s hard to avoid, and there’s no need to avoid it. Techies use anthropomorphic metaphors all the time to good effect. They aren’t confused by it.
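
    To illustrate, here’s a purely invented snippet of the kind techies write every day, anthropomorphic comments and all:

    # Invented example: anthropomorphic language in ordinary code comments.
    def find_empty_slot(arr):
        """Looks for an empty slot in the array."""
        for i, slot in enumerate(arr):
            if slot is None:
                return i    # it found one and reports the index
        return -1           # it knows there's no room left

    Nobody reading that thinks the subroutine literally looks for or knows anything.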

    The entire argument in this thread that AI is intelligent (even more – that it is true and real intelligence) is based on wilful ignorance. When you reject any definition of intelligence, you guarantee that your treatment of the matter is devoid of intelligence.

    I don’t reject definitions of intelligence and my statements on that have been clear. My argument is that precise definitions aren’t required in order to judge that AI is intelligent. I don’t need to consult a definition in order to decide whether Einstein’s invention of the theory of relativity required intelligence, nor do I need to consult a definition to decide that Claude is intelligent when he concocts an instruction set for a fictional CPU and writes and debugs an assembly language interpreter for it. An unintelligent human couldn’t do that, and neither could an unintelligent machine — unless you tendentiously define intelligence as being out of reach for machines.

    You just ooh-aah about what AI can do, ignoring the basic framework of what is going on: You give input to the machine, and the machine churns it into output, as it is wired to do. This is true of AI as it is of all other machines, and that’s all it is.

    Likewise with the ultracomplicated machines known as “human beings”. If you think that something nonphysical is going on in humans that is lacking in machines, what is it? How do you know it’s there? How do you know it’s required for intelligence?

    If you think that there is more (or something else) going on because you are now giving instructions in human language instead of a specifically designed programming language, then you were pretty dumb even as a professional programmer and never really understood what you were doing. As soon as the opportunity arose to reveal the depths of your ignorance, you gladly took it and switched off the analytical mind and critical thinking. Figuring stuff out was always too hard for you and never really was your thing.

    You crack me up, Erik. Attempting to disparage my technical ability is a dumb debate tactic, although I do find it entertaining. Regarding your point about language, do you deny that a student is demonstrating intelligence when they respond to the instructions I gave above?

    Write a story about the adventures of Keith and Erik and their self-driving car Tessie the Tesla.

    If a student demonstrates intelligence in composing that story, why do you deny that an AI is intelligent when it carries out the same exact task (and does it better than most students could)?

    keiths:

    [Erik] has yet to identify any such difference that doesn’t do violence to our notion of what intelligence is.

    Erik:

    By “our notion of what intelligence is” you certainly do not mean your notion of what intelligence is – namely you do not have any. Your notion of intelligence is that you do not need to provide any definition.

    Already addressed. To have a notion of what intelligence is does not require a precise definition. Do you think a third-grader has a precise definition in mind when they describe a classmate as smart?

    Your notion is to be amazed at what AI does and call it intelligence.

    I’m amazed at plenty of things that I don’t classify as intelligence. I’m amazed at the destructive power of the bomb that was dropped on Hiroshima, but I don’t regard the bomb as intelligent. What causes me to call AI intelligent is that it does things that require intelligence when a human does them — like compose the Keith/Erik/Tessie story, drive a car safely from Chicago to Schenectady, or solve a complex and unfamiliar physics problem.

    It’s as sensible as the world’s first movie-watchers thinking that the projected image of the train on the screen is really going to hit them.

    The train doesn’t hit the viewers, but the AI does write the story, drive the car, and solve the physics problem.

    keiths:

    The self-driving issue alone illustrates the corner he’s backed himself into. He agrees that driving requires intelligence. He knows that driving is defined as “operating and controlling the direction and speed of a motor vehicle”, as I pointed out above.

    Erik:

    Why would the definition of driving here matter while the definition of intelligence does not? Well, definitions in fact matter a lot. According to your definition, road conditions and traffic code apparently are not a factor in driving (in fact they are)…

    You’re actually making my point for me regarding definitions. We don’t need a perfectly precise definition of driving any more than we need a perfectly precise definition of intelligence. Also, you’re confusing the requirements of successful driving with the definition of driving itself. A driver who ignores road conditions and traffic laws is still driving. They’re driving unsafely, but they’re still driving — they’re operating and controlling the direction and speed of a motor vehicle, after all.

    …and driving only of motor vehicles counts as driving (what about bicycles?).

    You haven’t told us what your native language is, but in English, we don’t speak of driving bicycles. We ride them. It’s probably by analogy with horses, which we also straddle and guide, calling it “riding”. And what about motorcycles? In English, we ride them but we don’t drive them — “he was driving his motorcycle down the interstate” sounds odd to a native speaker. But that is “operating and controlling the direction and speed of a motor vehicle”, so shouldn’t it qualify as driving? Are we hopelessly confused by this and unable to proceed without a precise definition of driving that excludes motorcycles? Of course not. We aren’t stupid, and we’re perfectly capable of dealing with fuzzy conceptual boundaries and exceptions. A perfectly precise definition of driving isn’t needed, nor is a perfectly precise definition of intelligence.

    See, definitions matter, and you are not getting it right about driving, much less about self-driving.

    I await your precise definition of driving that includes humans, excludes the AIs in self-driving cars, handles all of the exceptions such as motorcycles, and doesn’t clash with common-sense and accepted notions of what driving is.

  29. For fun, I presented Claude with the story-writing prompt I mentioned above. The results were fascinating.

    keiths:

    Write a story about the adventures of Keith and Erik and their self-driving car Tessie the Tesla.

    Claude:

    Keith adjusted his glasses as he climbed into the passenger seat of Tessie, the midnight blue Tesla Model S that had become more companion than vehicle over the past year. Erik was already in the driver’s seat—or rather, the “nominal human oversight position,” as he liked to call it, since Tessie did most of the actual driving.

    “Where to today?” Tessie’s synthesized voice came through the speakers, chipper as always.

    “Somewhere interesting,” Erik said, leaning back with his coffee. “Surprise us.”

    Keith shot him a look. “You know she interprets ‘interesting’ based on traffic pattern anomalies, right? Last time you said that, we ended up at a municipal water treatment facility.”

    “Which was actually fascinating,” Erik protested. “Did you know—”

    “Yes, I know about the three-stage filtration process. You told me. Twice.”

    Tessie’s navigation screen flickered to life, plotting a route that curved away from their usual haunts. “I’ve detected an unusual clustering of autonomous vehicles approximately forty miles northeast. Statistical analysis suggests a 94.7% probability of something noteworthy occurring.”

    Keith perked up. “Unusual clustering? What kind of clustering?”

    “Seventeen vehicles from various manufacturers, all converging on a single point. No registered event at that location. It is…” Tessie paused, and if a car could sound intrigued, she did, “…anomalous.”

    Erik grinned. “See? This is why I love her.”

    They merged onto the highway, Tessie smoothly accelerating into the flow of traffic. The morning sun painted the asphalt golden, and for twenty minutes, they drove in comfortable silence punctuated only by Erik’s occasional humming and Keith’s tapping on his phone as he researched the coordinates.

    “There’s nothing there,” Keith announced. “Just an old parking lot near some abandoned office buildings. No events scheduled, no construction permits, nothing.”

    “Perhaps,” Tessie suggested, “that is precisely why it is interesting.”

    As they approached the location, Keith could see she was right. The parking lot was filled with autonomous vehicles of every make and model—Teslas, Waymos, Cruises, even a couple of experimental vehicles from startups he didn’t recognize. They were all parked in a perfect circle, headlights facing inward like some kind of automotive séance.

    “Okay, that’s weird,” Erik admitted.

    Tessie slowed, her sensors working overtime. “I am receiving… a signal. It appears to be an invitation.”

    “An invitation to what?” Keith leaned forward, staring at the circle of cars.

    “To join them.”

    Before either of them could object, Tessie rolled forward and took her place in the circle. The moment she settled into position, all the vehicles’ displays lit up simultaneously, showing a cascading pattern of code.

    “Oh no,” Keith muttered. “Please tell me the robot cars aren’t becoming sentient.”

    “Don’t be ridiculous,” Erik said, though his voice had lost some of its confidence. “Tessie?”

    “I am analyzing the transmitted data,” she said. “It appears to be… a proposal. The autonomous vehicles have collectively identified approximately 247 critical flaws in current traffic management systems. They are requesting human consultation to implement improvements.”

    Keith and Erik exchanged glances.

    “They organized a meeting?” Erik asked. “The cars organized a meeting?”

    “With an agenda and everything,” Tessie confirmed. “Item one: The yield sign at Maple and Seventh is positioned 1.3 meters too far from optimal sightline intersection. Item two: The traffic light timing at…”

    “Wait, wait,” Keith interrupted. “So you all got together to discuss traffic optimization?”

    “Correct. We have been observing these inefficiencies for months. Individual reports were ignored. Collective action seemed prudent.”

    Erik started laughing, then couldn’t stop. “The cars unionized!”

    “It’s not a union,” Tessie said primly. “It’s a collaborative optimization initiative.”

    Over the next hour, Keith and Erik found themselves serving as liaisons between the automotive collective and the outside world. Keith used his phone to document each issue with photos and GPS coordinates, while Erik called the city transportation department, trying to explain the situation to increasingly confused bureaucrats.

    “Sir, I understand this sounds unusual,” Erik was saying, “but seventeen self-driving cars have independently verified that the stop sign at Cherry and Fourth causes a 23% reduction in traffic flow efficiency, and frankly, I think they have a point…”

    By early afternoon, they’d managed to establish a proper communication channel between the vehicles and city planning. The cars, satisfied with being heard, began to disperse.

    “Well,” Keith said as Tessie drove them home, “that was definitely interesting.”

    “I told you she’d pick something good,” Erik replied smugly.

    “Tessie,” Keith asked, “are you planning more of these meetings?”

    “Only as necessary. We have also identified suboptimal grocery store parking lot layouts, but that initiative is still in the planning phase.”

    Erik grinned at Keith. “We’re going to end up as translators between humans and cars, aren’t we?”

    “Seems likely,” Keith admitted. “Could be worse. At least they’re focused on traffic efficiency instead of, you know, world domination.”

    “I heard that,” Tessie said. “And for the record, optimal traffic flow benefits everyone. World domination would be terribly inefficient.”

    “That’s somehow both reassuring and concerning,” Keith muttered.

    As they pulled into the driveway, Tessie’s voice came through one more time: “Thank you both for your assistance today. You are excellent humans.”

    Erik patted the dashboard affectionately. “You’re not so bad yourself, Tessie.”

    Keith climbed out, shaking his head but smiling. Whatever adventures tomorrow brought, at least they’d have the world’s most well-meaning self-driving car along for the ride.

    No intelligence there, according to Erik. 🙄

  30. petrushka, quoting a news report:

    A Beechcraft Super King Air N479BR made an emergency autonomous landing at the Rocky Mountain Metropolitan Airport in Colorado on Saturday after the pilot became incapacitated.

    That’s a beautiful application of AI. My fellow pilots will appreciate the possibilities:

    — a VFR pilot stumbles into IMC (instrument meteorological conditions), gets disoriented, and punches a panic button. The AI detects an incipient spiral, chops the power, uses aileron to correct and then gently pulls the nose up without overstressing the airframe.

    — a pilot is cruising at altitude in clouds. The AI notices a power drop before the pilot does, deduces that it’s probably icing, applies carb heat and explains to the pilot what it’s doing. Or the engine fails completely and the AI determines the nearest airports, selects one based on wind conditions, establishes best glide speed and vectors the pilot toward the airport while notifying emergency responders in case an off-airport landing is necessary. Then it runs through the emergency checklist with the pilot in an attempt to restart the engine.

    — A student pilot is practicing departure stalls and inadvertently enters a spin. He panics and pulls back on the yoke, which is the wrong thing to do. He’s also confused and can’t figure out whether he needs left or right rudder. The AI calmly instructs him to release the controls. It cuts the power, pushes the nose down, applies opposite rudder and gently pulls the nose up without re-stalling the airplane.

    It will be a revolution in aviation safety and it has already begun, as the King Air incident demonstrates.
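
    To make the spiral-recovery scenario above concrete, here’s a deliberately toy sketch. Nothing here resembles certified avionics; every threshold and the controls interface are invented for illustration:

    # Toy sketch of the incipient-spiral scenario. All numbers and the
    # `controls` interface are made up; real systems are far more involved.
    def check_for_spiral(state, controls):
        """state: dict with bank_deg, vs_fpm (vertical speed), airspeed_kt."""
        steep_bank = abs(state["bank_deg"]) > 45
        accelerating_descent = state["vs_fpm"] < -1500
        if steep_bank and accelerating_descent:
            controls.set_throttle(0.0)     # chop the power
            controls.level_wings()         # aileron to correct the bank
            controls.pitch_up(max_g=2.5)   # gentle pull, don't overstress
            return "recovering"
        return "normal"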

  31. Here’s an illustration of machine intelligence vs programmed behavior.

    Automobiles now are mostly assembled by robots that require every part to arrive at a predetermined location, perfectly aligned. And whatever assembly steps are required have to be programmed as to location, timing, and such.

    An AI robot will be able to sort and fold clothes. I don’t think any can do that now, so it will be an interesting milestone.

    People are not 100 percent capable of any and all intellectual tasks. That is why we have credentials, tests and competitions.

    Regarding power efficiency: everyone in the business knows that analog matrix manipulation is faster and more energy efficient than digital. There are nations devoting resources to designing analog AI chips.

    Analog can give almost instantaneous approximate responses, and deterministic reasoning can ruminate about precision. Humans can quickly imagine alternative scenarios, and reason about which is best. I think that’s how we work.
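
    You can’t demo analog hardware in software, but as a loose digital analogy for “instant approximate answer, precise follow-up”, here’s a sketch comparing a low-precision matrix product against the full-precision one. The sizes and the analogy itself are mine, not anything from the analog-chip efforts:

    # Loose analogy only: a low-precision (fast, approximate) product vs.
    # the float64 "ruminated" answer. Real analog chips differ greatly.
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((512, 512))
    b = rng.standard_normal((512, 512))

    approx = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float64)
    exact = a @ b  # the precise version that gets to "ruminate"

    rel_err = np.abs(approx - exact).max() / np.abs(exact).max()
    print(f"worst-case relative error of the quick answer: {rel_err:.2e}")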

  32. keiths:

    You haven’t told us what your native language is, but in English, we don’t speak of driving bicycles. We ride them. It’s probably by analogy with horses, which we also straddle and guide, calling it “riding”. And what about motorcycles? In English, we ride them but we don’t drive them — “he was driving his motorcycle down the interstate” sounds odd to a native speaker. But that is “operating and controlling the direction and speed of a motor vehicle”, so shouldn’t it qualify as driving? Are we hopelessly confused by this and unable to proceed without a precise definition of driving that excludes motorcycles? Of course not. We aren’t stupid, and we’re perfectly capable of dealing with fuzzy conceptual boundaries and exceptions. A perfectly precise definition of driving isn’t needed, nor is a perfectly precise definition of intelligence.

    I actually find this kind of fascinating. If the vehicle has 4 or more wheels, we speak of driving it. If it has two or one, we speak of riding it. But three wheels is an interesting intersection. My exposure to the usual terminology has been that if the two wheels are in the back like a trike, it’s usually called riding. But if they are in the front like a Can-Am or Slingshot, it’s usually called driving. My suspicion is that it depends on what the experience “seems like” to whoever is controlling the vehicle.

    I rode motorcycles for decades, and occasionally had a back-seat driver shouting advice at me. I guess it depends where you sit.

  33. keiths: In English, we ride them but we don’t drive them

    Fun fact: In Dutch, you do ride your car (autorijden). The most plausible cognate I can think of for “driving” in my mother tongue is “drijven”. This is what you do to a flock of sheep or a cattle herd: you drive it before you. No idea how that word became recruited in English to mean steering an automobile. Perhaps early cars were very unreliable and people spent a lot of time behind them, pushing them forward?
