AI: The question that really matters

I find the logic of keiths’s recent post, Is AI really intelligent?, refreshingly persuasive. I was particularly impressed by his examples showing that the AI assistant Claude (I presume he’s referring to either version 3.7 or version 4) possesses an “extended thinking” or “thinking mode” capability that allows users to view the model’s reasoning process in a dedicated “Thinking” section of the user interface. Keiths referred to this capability as a “thought process window.” He even cited AI thought processes showing that AIs are able to reflect on their condition and understand their own limitations. I think it’s fair to describe as intelligent something that not only generates output typically requiring intelligence, but also does so as the result of a reasoning process.

Nevertheless, I have to say I disagree with keiths’s argument addressed to Erik: “If you want to argue that machines aren’t and never can be intelligent, then you need to explain how human machines managed to do the impossible and become intelligent themselves.” For one thing, keiths’s definition of a machine as something “made up of physical parts operating according to physical law” is far too broad: a rock would qualify as a machine under this definition. And while human body parts can be viewed as machines, the human brain differs from a computer in many important respects.

For me, however, the question that really matters is: will AI ever attain AGI (artificial general intelligence)? That is, will an AI ever be able to apply its intelligence to any intellectual task a human being can perform? Personally, I doubt it, for two reasons.

First, there’s good reason to believe that human general intelligence is a product of the evolution of the human brain. (I’m sure that keiths would agree with me on this point.) If it turns out that there are profound dissimilarities between brains and computers, then general intelligence may depend on features peculiar to evolved brains, and we no longer have reason to think that making computers faster or more powerful will render them capable of artificial general intelligence.

Second, any enhancement to AI appears necessarily to involve adding particular abilities to its already impressive ensemble. This strikes me as a futile way to pursue generality: no collection of particular capacities will ever amount to a general ability. Or perhaps AGI believers are really HGI (human general intelligence) disbelievers? Do they think that human intelligence is merely a finite collection of domain-specific intelligences, as proponents of the “modularity of mind” thesis assert?

However, I imagine that many of my readers will be inclined to defend the possibility of building an AGI. If so, I’d like to hear why. Over to you.