Is AI really intelligent?

I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.

You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).

I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.

704 thoughts on “Is AI really intelligent?”

  1. In discussing a mathematical result with Claude (OP forthcoming), I used the made-up word ‘numerize’ to describe the conversion of a predicate (which can be true or false) to a number — 1 for true, 0 for false. ‘Quantize’ is already taken, with a different meaning, so I settled on ‘numerize’. I like to play with language and it can be fun to test AI’s ability to recognize neologisms and infer their meaning.

    My prompt was

    Putting brackets around predicates is the standard way to numerize them in mathematical expressions?

    Claude immediately understood what I meant and responded appropriately. He has abstracted the idea that adding -ize to a noun or adjective creates a verb meaning “to bring about X”, where X is the base word. This isn’t something you’d intuitively expect from a system that is fundamentally built on next-token prediction, and the fact that AI is able to do it is yet more evidence that AI is truly intelligent.
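    For readers unfamiliar with the bracket convention the prompt alludes to: writing brackets around a predicate is the Iverson bracket, where [P] equals 1 if P is true and 0 otherwise. A minimal sketch of “numerizing” in code (the function name follows the comment’s coinage and is not a standard name):

    ```python
    def numerize(predicate: bool) -> int:
        """Iverson-bracket style: map True -> 1, False -> 0."""
        return 1 if predicate else 0

    # Classic use of the Iverson bracket: counting by summing
    # numerized predicates.
    values = [1, 2, 3, 4, 5, 6]
    even_count = sum(numerize(v % 2 == 0) for v in values)
    print(even_count)  # 3
    ```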

  2. The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness
    Alexander Lerchner
    Google DeepMind
    2026-03-19
    Computational functionalism dominates current debates on AI consciousness. This is the hypothesis that subjective experience emerges entirely from abstract causal topology, regardless of the underlying physical substrate. We argue this view fundamentally mischaracterizes how physics relates to information. We call this mistake the Abstraction Fallacy. Tracing the causal origins of abstraction reveals that symbolic computation is not an intrinsic physical process. Instead, it is a mapmaker-dependent description. It requires an active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states. Consequently, we do not need a complete, finalized theory of consciousness to assess AI sentience—a demand that simply pushes the question beyond near-term resolution and deepens the AI welfare trap. What we actually need is a rigorous ontology of computation. The framework proposed here explicitly separates simulation (behavioral mimicry driven by vehicle causality) from instantiation (intrinsic physical constitution driven by content causality). Establishing this ontological boundary shows why algorithmic symbol manipulation is structurally incapable of instantiating experience. Crucially, this argument does not rely on biological exclusivity. If an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture. Ultimately, this framework offers a physically grounded refutation of computational functionalism to resolve the current uncertainty surrounding…

  3. petrushka,

    I saw that paper too. I think I’ll do an OP on it.

    This sort of thing isn’t promising…

    It requires an active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states.

    …but I’m sure the thread will end up being about AI consciousness generally, not just this paper.

  4. This is not scientific, but I think consciousness begins with tropisms and evolves to support survival.

    I don’t think you can evolve consciousness without evolving layers of survival mechanisms.

    Trying to build top to bottom would be like trying to program the weights in an LLM from first principles.
