Is AI really intelligent?

I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.

You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).

I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.

102 thoughts on “Is AI really intelligent?”

  1. keiths:

    If you ask an AI to write a story, it produces a story. It’s a real story, with characters, a plot, and a resolution. If you show it to someone without telling them that an AI produced it, they’ll describe it as a story. It’s a real story, but according to you, the AI is only simulating the process of writing. If so, why does a real story get produced? How can a fake story-writing process produce real stories? If the process is fake, why aren’t the stories fake?

    Erik:

    What you need to know is what other stories are in its database.

    No, you don’t. I already had Gemini generate a story for you that clearly is not in its training dataset (or anywhere on the internet, for that matter):

    The moment Harold initialed line 32b of his annual property tax form, the paper didn’t just submit; it ignited into a shimmering, blue-green aura, violently folding itself into a perfectly symmetrical, eight-dimensional tesseract. This bureaucratic anomaly then projected a single, deafening mathematical equation, Σ (all things) = (zero liability), which immediately converted all local mass into pure, apologetic meringue, causing the entire office building to dissolve into a massive, guilt-ridden dessert. Harold realized, as he paddled a canoe fashioned from a hollowed-out spreadsheet through the rising tide of lemon filling, that the form hadn’t calculated his taxes correctly; it had simply determined that the entire concept of financial debt was a tragic misunderstanding by a bored, sentient nebula.

    Erik:

    You have gotten around to talking about plagiarism with Flint, I see. Unfortunately, you have the wrong idea of how plagiarism is detected and investigated. It is not at all about how human-like the text seems.

    You’re confusing plagiarism detection with AI detection. They aren’t the same. Something can be plagiarized but not AI-produced; it can be AI-produced but not plagiarized; and it can be both plagiarized and AI-produced. It can also be neither plagiarized nor AI-produced. All four combinations are possible, because the properties are orthogonal.

    Erik:

    Have you examined the database of your AI? Why not?

    No, because I understand how neural networks work. They don’t store their training dataset in a database. They can’t just look up everything they’ve been trained on. Anyway, it’s clear that AIs don’t merely plagiarize. See the above property tax/lemon meringue/sentient nebula story. I challenge you to go out on the internet and find the original from which Gemini was cribbing. You won’t find it, because there isn’t one.
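
    To make the distinction concrete, here’s a minimal sketch (a toy word-transition model, not Gemini or any real network): a database stores the training texts themselves and can return them verbatim, while training reduces the texts to numerical weights from which the originals can’t simply be looked up.

```python
# A toy illustration (hypothetical stand-in, not any real model): a database
# keeps retrievable records, while a "trained" model keeps only learned
# statistics derived from the training texts.
from collections import defaultdict

training_stories = [
    "goldilocks tasted the porridge",
    "the bears came home",
]

# Database: the texts themselves are stored and can be looked up verbatim.
database = list(training_stories)
print("the bears came home" in database)    # True: exact retrieval works

# Model-like storage: training collapses the texts into word-transition
# weights; the original sentences are no longer present as records.
weights = defaultdict(float)
for story in training_stories:
    words = story.split()
    for a, b in zip(words, words[1:]):
        weights[(a, b)] += 1.0

print(("the", "bears") in weights)          # True: a learned statistic
print("the bears came home" in weights)     # False: nothing to look up
```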

    Likewise, I challenge you to search far and wide on the internet for an image from which this Gemini-generated image was cribbed. Good luck.

    Erik:

    Can you really tell a “real story” just by looking at it? College professors don’t think so.

    Huh? College professors struggle to recognize stories when looking at them? Where is this strange place in which you live, where college professors struggle to identify stories? Where I’m from, even children know the difference between stories and non-stories.

    Try it out in your strange land. Find a kid. Read them “Goldilocks and the Three Bears” and ask them if it’s a story. Then do the same with a page from the local phone book. Report your results here.

    Erik:

    They have to recall other stories they have heard and read over the years and verify against them in order to be sure. Why do you think you are better?

    I have no idea what you’re talking about, unless you’re saying that college professors, like everyone else, learn to recognize stories by being exposed to them. It’s the same with AIs.

  2. J-Mac:
    Why don’t you ask AI if it would take an mRNA “gene” technology jab, now called a vaccine, to protect itself from pathogens?

    Why not continue in the ‘Antivax’ thread? You seemed to be rolling your sleeves up for a good old ding-dong, then just withered away. I addressed your point on flu – silence. I addressed the ‘definition of vaccine’ trope. Silence. But every now and then you pop up in other threads to say something vague and petulant, then disappear again.
