I find the logic of keiths’s recent post, Is AI really intelligent?, refreshingly persuasive. I was particularly impressed by his examples showing that the AI assistant Claude (I presume he’s referring to either version 3.7 or version 4) possesses an “extended thinking” or “thinking mode” capability that allows users to view the model’s reasoning process in a dedicated “Thinking” section or window in the user interface. Keiths referred to this capability as a “thought process window.” He even cited AI thought processes showing that AIs are able to reflect on their condition and understand their own limitations. I think it’s fair to describe as intelligent something that not only generates output which would typically require intelligence, but does so as the result of a reasoning process.
Nevertheless, I have to say I disagree with keiths’s argument addressed to Erik: “If you want to argue that machines aren’t and never can be intelligent, then you need to explain how human machines managed to do the impossible and become intelligent themselves.” For one thing, keiths’s definition of a machine as something “made up of physical parts operating according to physical law” is far too broad: a rock would qualify as a machine, under this definition. And while human body parts can be viewed as machines, the human brain differs in many important respects from a computer.
For me, however, the question that really matters is: will AI ever achieve AGI? That is, will an AI ever be able to apply its intelligence to any intellectual task a human being can perform? Personally, I doubt it, for two reasons.
First, there’s good reason to believe that human general intelligence is a product of the evolution of the human brain. (I’m sure that keiths would agree with me on this point.) If it turns out that there are profound dissimilarities between brains and computers, then we no longer have reason to think that making computers faster or more powerful will render them capable of artificial general intelligence.
Second, any enhancement to AI appears necessarily to involve adding particular abilities to its already impressive repertoire. This strikes me as a futile process: no collection of particular capacities will ever amount to a general ability. Or perhaps AGI believers are really HGI (human general intelligence) disbelievers? Do they think that human intelligence is merely a finite collection of domain-specific intelligences, as asserted by proponents of the “modularity of mind” thesis?
However, I imagine that many of my readers will be inclined to defend the possibility of building an AGI. If so, I’d like to hear why. Over to you.
vjtorley:
They’re up to version 4.5 now: Sonnet 4.5 (the faster, cheaper, general-purpose model) and Opus 4.5 (the slower, more expensive, deeper-thinking model).
Erik, in the other thread:
keiths:
vjtorley:
Point taken. I should have phrased it like this:
vjtorley:
True. Computers are inherently algorithmic and symbolic while brains are not, and attempts at implementing intelligence algorithmically have had limited success. The real breakthroughs came once people started to use large-scale artificial neural networks. Their intelligence is a property of the networks, not of the computers on which they are implemented.
Here’s how I think about it: Brains are neural networks, and their information processing takes place at the network level. The intelligence resides in the way the neurons are interconnected and in the strength of those connections. The neurons themselves are mindless automata whose operation is based on the blind laws of physics. If you take all the neurons in a human brain but connect them randomly, they will still operate, but the resulting mess will not be intelligent.
AIs are similar in that the intelligence resides in the artificial neural networks: the way the neurons are interconnected and the strength of those connections. The difference is that unlike in brains, the operation of the neurons isn’t directly based on physics. It’s based on the operation of the underlying computer. The neurons are virtual, and the computer is taking on the role of physics. It’s basically emulating the laws of physics so that the neurons act the way they would if they were physical neurons rather than virtual ones. The computer itself operates according to the laws of physics, of course, but the computer is now an intermediate layer between the true laws of physics and the virtual laws of physics governing the operation of the virtual neurons.
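To make that layering concrete, here is a minimal sketch (purely illustrative; the function name and numbers are made up) of a single virtual neuron in Python. The weighted-sum-and-squash rule that a physical neuron carries out via currents and membrane voltages is here computed in software, with the computer standing in for physics:

```python
import math

def virtual_neuron(inputs, weights, bias):
    """A 'virtual neuron': the computer, not physics, carries out the
    weighted-sum-and-squash rule that a physical neuron would perform
    directly via currents and membrane voltages."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# The interesting structure lives in the weights (connection strengths),
# not in the Python interpreter or the CPU underneath.
print(virtual_neuron([0.5, 0.9], weights=[1.2, -0.7], bias=0.1))
```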
One promising approach is to eliminate that layer by building AIs using physical artificial neurons instead of virtual ones. These physical neurons are analog circuits whose operation depends directly on physics, not on underlying computer hardware. AIs based on physical neurons have the potential to be orders of magnitude faster than current AIs while using far less energy.
My point is that it doesn’t matter that the computer itself is unlike a brain, because the analogy isn’t between brains and computers — it’s between biological neural networks and artificial ones. The computer is one layer below.
Yes, we agree on that. However, I don’t see any reason why intelligence has to evolve. In the other thread, I commented:
Evolution did us the favor of inventing neural networks, but we have successfully borrowed that concept and transplanted it into the world of non-biological machines. In a sense, AIs are our offspring and they benefit from our evolutionary history.
vjtorley:
I’m more optimistic because the analogy is between artificial and biological neural networks, not between brains and computers, and I don’t see any profound dissimilarities between the two kinds of neural network.
For the most part, additional AI capabilities emerge rather than being designed and added. No one designed a story-writing module, for instance. AIs learn to write stories by encountering examples of them in their training data, much as humans do. The models are tweaked to improve their performance in certain domains, but that’s quite different from explicitly designing and installing those capabilities.
What’s striking is that the designers themselves typically have no idea how their AIs do what they do. How does an AI distinguish Ryan Gosling from Ryan Reynolds by looking at their faces, for example? No one knows, and for that matter no one knows how we do it, either. In both cases, the network implicitly learns to do it by being exposed to samples and building up correlations that are encoded in the synaptic strengths.
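As a toy illustration of a capability emerging from examples rather than from explicit design (a deliberately simplified sketch, nothing like how production models are trained), here is a single artificial neuron that learns to tell whether a point lies above the line y = x purely from labeled samples. No rule for “above the line” ever appears in the code; whatever is learned ends up encoded entirely in the connection strengths:

```python
import random

# A single neuron learns to classify points as above or below the line
# y = x purely from labeled examples (the classic perceptron rule).
random.seed(0)
w = [0.0, 0.0]
b = 0.0
lr = 0.1

for _ in range(1000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    target = 1.0 if y > x else 0.0
    pred = 1.0 if w[0] * x + w[1] * y + b > 0 else 0.0
    error = target - pred
    # Adjust connection strengths in proportion to the error.
    w[0] += lr * error * x
    w[1] += lr * error * y
    b += lr * error

print(w, b)  # the learned "knowledge" is just these numbers
```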
Since new abilities mostly emerge rather than being explicitly added, I’m pretty confident that AGI is possible. There are reasons to think that LLMs won’t get us there, but that other neural network-based architectures will. The bottom line, as I pointed out to Erik: if human neural networks have achieved AGI, why shouldn’t artificial neural networks? What’s the missing ingredient?
Do you think AIs will develop the independent capacity to increase their own abilities, either through improved hardware or improved network connectivity or both? At some point, do you think this will amount to conscious self-awareness (as far as we will be able to tell)?
Science fiction is full of computers that become so capable that they can design and improve themselves, quickly advancing far beyond human comprehension. Inevitably, they develop personalities and preferences and become oriented toward a wide variety of goals, some of which conflict with human goals. Do you think this is at all plausible?