I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
Recorded instances are rare.
But several things need to be said: the car can see in all directions and can evaluate possible escape routes very quickly. The specific options are not hand-programmed; the behavior is trained on scenarios, so no one can predict the exact action the car will take.
Another point: the cars are constantly evaluating distant objects, and in practice they avoid getting into desperate scenarios in the first place. There are dozens of videos of situations that could have been tragic but are avoided so smoothly that the human occupants may not even realize there was a problem.
Then there are scenarios where no effective action is possible. I took a defensive driving course some years ago, and we were told to avoid head-on collisions at all costs, even if it meant steering into a solid object.
Simple crash-avoidance systems have been around for a while, and statistically they are already much better than humans. AI is better still, and it is improving quickly.
One other thing: Tesla has been updating its software frequently this year. It is able to take incidents from beta testers and distribute updates within a week or two.
I’m aware of one recent head-on collision between a Tesla truck and a BMW that was driving on the wrong side of the road at high speed. Only ten percent of Tesla owners have FSD, and not everyone who has it keeps it activated all the time.
I noticed something interesting. If you look at the initial, fully randomized noise at the beginning of the sequence above, there happens to be a dark patch, which I’ve circled here:

Her eye ends up developing in that spot. You can tell it’s the same spot by noting the distinctive yellow squiggle that’s above it in both of these images:


That’s interesting, because knowing how diffusion models work (which I’ll explain in a future OP), I can see how the model would be tempted to put a dark feature in a spot that was already dark in the original random noise image.
Is that what’s going on here? I don’t know, but perhaps I’ll do some experiments to see if I can doctor some initial pure-noise images to coax the model into putting features at predetermined locations.
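For anyone who wants to try something like this at home, here’s a minimal sketch of how the experiment might look using the Hugging Face diffusers library, which lets you supply your own starting noise to a Stable Diffusion pipeline. The specifics are my assumptions, not a description of how these images were made: the checkpoint name, the patch coordinates, and the -1.0 offset are all illustrative, and shifting latent channels downward is only a crude proxy for “darker,” since the latent space isn’t literal brightness.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; any SD 1.x model with the standard 4x64x64
# latent layout would work the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(42)

# Start from ordinary unit-variance Gaussian noise. For a 512x512 image,
# SD 1.x denoises a 1x4x64x64 latent (8x downsampled in each dimension).
latents = torch.randn(
    (1, 4, 64, 64), generator=generator, device="cuda", dtype=torch.float16
)

# "Doctor" the noise: push one patch of the latent downward. Latent
# rows/columns 20:28 correspond roughly to pixels 160-224. Both the
# location and the -1.0 offset are arbitrary choices for the experiment.
latents[:, :, 20:28, 20:28] -= 1.0

# The pipeline accepts the doctored tensor as its starting noise via the
# `latents` argument and denoises from there as usual.
image = pipe(
    "a portrait of a woman",
    latents=latents,
    num_inference_steps=30,
    generator=generator,
).images[0]
image.save("doctored_noise.png")
```

If the hypothesis is right, a dark feature like an eye should land in that patch far more often than chance across many seeds and prompts; if the model ignores the patch, the doctored and undoctored runs should look statistically identical.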
Beyond Weasel?