I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
I think you may have a misunderstanding about how the cars are trained. Individual cars do not learn (although they have user-chosen options that can be adjusted after a situation is encountered).
But every car records what its cameras see, and if it encounters an unexpected event, or has an accident or near miss, the recordings become part of the training data for the whole fleet.
Minor example: a recent version of FSD overreacted to leaves blowing across the road. This was reported by a number of early adopters, and a fix was pushed to everyone within a couple of weeks. Everyone running the same version of the software has the same level of experience and the same abilities.
If you haven't watched a recent video made by an actual owner in a complex scenario involving heavy traffic, bicycles, and pedestrians, you really can't fathom how much progress has been made in the last six months.
The cars are very unlikely to have an accident, because their highest priority is avoiding hitting things and people.
But the rules of the road can be very complex, particularly speed limits. For example, a sign may say "End 45 MPH" without immediately specifying the new limit.
School zones have speed limits that are effective only during certain periods.
Signs get covered by foliage.
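To make the school-zone point concrete, here's a minimal sketch of why "the speed limit" isn't a single number: the limit in force can depend on the time of day. The function name, parameters, and hours below are invented for illustration, not anything an actual car runs.

```python
from datetime import time

def effective_limit(posted_limit_mph, school_zone_limit_mph,
                    school_hours, now):
    """Return the speed limit in force at `now`.

    school_hours is a (start, end) pair of datetime.time values
    during which the school-zone limit overrides the posted one.
    """
    start, end = school_hours
    if school_zone_limit_mph is not None and start <= now <= end:
        return school_zone_limit_mph
    return posted_limit_mph

# Outside school hours, the posted limit applies.
print(effective_limit(45, 25, (time(7, 0), time(16, 0)), time(18, 30)))  # 45
# During school hours, the lower limit is in force.
print(effective_limit(45, 25, (time(7, 0), time(16, 0)), time(8, 15)))   # 25
```

And this is the easy case: real school-zone signs often add "when children are present" or "on school days," conditions a camera-and-clock system can't resolve from the sign alone.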
I just did some driving through upstate New York. My car is not self driving, but it does display the current speed limit. It was wrong for most of the trip. It is very accurate in Connecticut.
GPS routing is pretty good for choosing roads and for telling you where to turn, but terrible for telling you that you have arrived. I’ve had three instances this week where it was off by several hundred feet at the destination.
I doubt it. Training neural networks is expensive and requires a lot of compute power, so at least for now I don’t think it’s something that cars can do on their own. I do know that they’re constantly recording data (via the automotive equivalent of a flight data recorder) so that after an accident, investigators can download the data and figure out what went wrong, and engineers can modify the training dataset or make other changes to avoid the problem in the future.
Yes, people can, and that’s one of the many differences between human driving and current AI driving. It doesn’t help Erik’s case, though, because he doesn’t need evidence that human drivers can do things that AI can’t. That’s obvious. He needs evidence that the AIs don’t actually drive. Driving requires intelligence, so if he acknowledges that self-driving cars actually drive, he’s conceding that AI is intelligent. (He actually did acknowledge that Waymos drive, but he had to walk it back when I pointed out the implications.)
I think he’s SOL on the driving. The driving that AIs do matches the dictionary definition: they “operate and control the direction and speed of a motor vehicle.” It ain’t simulated. Case closed.
He’s in a difficult position. He needs to prove that nothing AIs do requires genuine intelligence. Hence his “it’s all simulated” approach. But if even one such AI capability isn’t simulated, it’s game over. We’re already there with driving. Ditto for story-writing and analogizing and solving physics problems.
When a forklift lifts a pallet, it’s genuine lifting. When an AI writes a story, it’s genuine story-writing. When a self-driving car drives, it’s genuine driving. The latter two require intelligence. AI is intelligent.
The goalposts have been moved. Ordinary intelligence has been achieved. Extraordinary intelligence has not yet been achieved.
AIs can take IQ tests and score in the range of 120. Above average, but not yet PhD level. So we shouldn’t be surprised that they glitch out on puzzles.
Yesterday, I asked my browser's AI what to do with an Amazon package delivered to me by mistake. Before discussing its response, I will point out that the mistake was made by a human driver, and the level of complexity was low.
The AI misread my request and assumed my package had been delivered to someone else. It persisted in this misreading.
It turns out to be a trick question. Amazon has no procedure for this situation. The correct recipient was half a mile away, so I drove the package to their house. The law says I can simply keep it. Amazon will, no doubt, realize the package is lost and replace it. The poor browser AI has no consensus on which to base a response.
But it was unable to reason about the situation or acknowledge the dilemma. One strike against high-level intelligence.