I find the logic of keiths’s recent post, Is AI really intelligent?, refreshingly persuasive. I was particularly impressed by his examples showing that the AI assistant Claude (I presume he’s referring to either version 3.7 or version 4) possesses an “extended thinking” capability that allows users to view the model’s reasoning process in a dedicated “Thinking” section of the user interface. Keiths referred to this capability as a “thought process window.” He even cited AI thought processes showing that AIs are able to reflect on their condition and understand their own limitations. I think it is fair to describe as intelligent something that not only generates output of a kind that typically requires intelligence, but also does so as the result of a reasoning process.
Nevertheless, I have to say I disagree with the argument keiths addressed to Erik: “If you want to argue that machines aren’t and never can be intelligent, then you need to explain how human machines managed to do the impossible and become intelligent themselves.” For one thing, keiths’s definition of a machine as something “made up of physical parts operating according to physical law” is far too broad: under this definition, a rock would qualify as a machine. And while human body parts can be viewed as machines, the human brain differs in many important respects from a computer.
For me, however, the question that really matters is: will AI ever achieve AGI? That is, will AI ever be able to apply its intelligence to solve any intellectual task a human being can? Personally, I doubt it, for two reasons.