I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
Recorded instances are rare.
But several things need to be said: the car can see in all directions and can quickly evaluate possible escape routes. The specific responses are not programmed; the scenarios are trained, and no one can predict the actual action that will be taken.
Another point: the cars are constantly evaluating distant objects, and in actual cases they avoid getting into desperate scenarios. There are dozens of videos of situations that could have been tragic but are avoided so smoothly that the humans involved may not even realize there was a problem.
Then there are scenarios where no effective action is possible. I took a defensive driving course some years ago, and we were told to avoid head-on collisions at all costs, even if it meant steering into a solid object.
Simple crash avoidance systems have been around for a while. Statistically, they are much better than humans. AI is better, and it is improving quickly.
One other thing: Tesla has been updating software frequently this year. They are able to take incidents from beta testers and distribute updates in a week or two.
I’m aware of one recent head-on collision between a Tesla truck and a BMW driving on the wrong side of the road at high speed. Only ten percent of Tesla owners have FSD, and not everyone has it activated all the time.
I noticed something interesting. If you look at the initial, fully randomized noise at the beginning of the sequence above, there happens to be a dark patch, which I’ve circled here:

Her eye ends up developing in that spot. You can tell it’s the same spot by noting the distinctive yellow squiggle that’s above it in both of these images:


That’s interesting, because knowing how diffusion models work (which I’ll explain in a future OP), I can see how the model would be tempted to put a dark feature in a spot that was already dark in the initial random noise image.
Is that what’s going on here? I don’t know, but perhaps I’ll do some experiments to see if I can doctor some original pure noise images in order to coax the model into putting features at predetermined locations.
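If I do, the experiment might look something like the sketch below. This is untested and assumes the Hugging Face diffusers library, whose StableDiffusionPipeline (as I understand it) accepts a latents tensor as the starting noise; the model name, prompt, and patch location are purely illustrative, and whether nudging latent values actually corresponds to a dark patch in the finished image is part of what the experiment would have to establish:

```python
# Sketch of the "doctored noise" experiment (untested; assumes the
# Hugging Face diffusers API).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reproducible pure noise in latent space (4 x 64 x 64 for a 512 x 512 image).
gen = torch.Generator("cuda").manual_seed(1234)
latents = torch.randn((1, 4, 64, 64), generator=gen,
                      device="cuda", dtype=torch.float16)

# "Doctor" the noise: nudge the latent values in a small patch where we'd
# like a dark feature (an eye, say) to end up.  The location is arbitrary.
latents[:, :, 20:28, 35:43] -= 1.0

image = pipe("portrait of a woman", latents=latents).images[0]
image.save("doctored.png")
```

The idea would simply be to compare runs with and without the doctored patch and see whether the model reliably puts a feature in that spot.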
Beyond Weasel?
I read that the designers of the 286 were concerned that a malicious program able to get control in real mode could rewrite the interrupt and other tables, abuse call gates, or otherwise trash (or even become) the OS. Which is why the BIOS had to go through all that exercise of resetting the CPU via the 8042 in order to test memory and support VDisk.
I understand that you have seen protected mode OSes switch to real mode, but I’m pretty sure that that wasn’t possible on the 286. If the OS running on the 286 could “transition to real mode” the BIOS could have done it also.
Obviously there is no such difference.
There was an instance in the Ukraine war when dormant AI-driven drones were transported (by unsuspecting Russians) close to several targets, and then at a given moment the drones broke out of their packaging and started flying around. The moment was pre-programmed – the same moment for the entire fleet of drones. The target areas were pre-determined coordinates, and upon arrival the drones had to identify specific military airplane-like, tank-like, and other such objects to detonate themselves on. This is as close as drones have come to “selecting targets themselves” – which is to say, not close at all. And if it were any other way, it would be a scandalous war crime.
It is astonishing how little both of you know about this topic. It seems like you have been through intensive unlearning courses and excelled at them.
petrushka: literally everything you say about self-driving is catastrophically wrong. You swallow market hype uncritically and you are not allowing real-life user feedback to correct you.
I’m reminded of the joke of the lady watching the parade and noticing that “everyone in the whole parade is out of step except my son – and that includes the drummers!” I guess nobody but Erik can see the obvious – even those who have long professional careers in the discipline!
So we’re back to Dawkins:
In the face of this position, even Dawkins was helpless.
Flint,
If you got something to refute then why don’t you? Because you got nothing, that’s why.
keiths and petrushka have abandoned their expertise, if they ever had any in the first place. They don’t know what simulation is, they don’t know what software is, and, as it turns out, they also don’t know what hardware is. They don’t know how any of these things work, either in broad principle, technically, or legally. Do you? Can you post a fact for a change? For now, I’m the only one who cited actual facts in this thread instead of blather.
Erik:
Identifying potential targets and then picking one to go after is selecting a target.
It is astonishing to me that you can’t grasp petrushka’s simple point. Here are two scenarios that illustrate the difference:
Scenario #1:
You’re a soldier in combat. You see a squadron of enemy tanks approaching. You select a tank, point your Javelin at it, and fire. The Javelin hits the tank you selected.
Scenario #2:
You’re a soldier in combat. You launch an autonomous drone that has instructions to fly to a predetermined point and loiter. While it is loitering, a squadron of enemy tanks enters its field of view. It selects a tank, flies to it, and detonates.
In scenario #1, the soldier selected the target. In scenario #2, it was the drone that selected the target. The soldier didn’t select a target, because he didn’t know what the available targets were or would be. He was depending on the drone to select a target, which it did.
Here’s an analogy. The commander of a squadron of A-10s gets a radio call. Some ground forces are pinned down near Kandahar. The commander sends a pilot to that location to provide close air support. The pilot flies to that location, selects a target on the ground, and fires at it.
In that scenario, who selected the target? Was it the squadron commander, or the pilot? It was the pilot, obviously. The squadron commander’s role was to give instructions to the A-10 pilot. The pilot’s role was to fly to the combat zone and select and destroy targets.
The squadron commander is analogous to the soldier who launched the drone, and the pilot and his aircraft are analogous to the drone. It was the latter who selected the targets.
Erik:
You crack me up, Erik.
Are you ever going to answer my question?
You wrote:
I asked:
“No evidence, no matter how overwhelming, no matter how all-embracing, no matter how devastatingly convincing, can ever make any difference.” At first, I thought Dawkins was exaggerating. You have proved him right.
Flint:
But they made writes to the MSW privileged in protected mode. Since writes to the MSW are privileged, there is no danger of rogue programs trashing the IDT or other critical structures. The OS has full control of what code does and doesn’t get to run in real mode, and it can limit that access to trusted code such as legacy device drivers.
It’s analogous to CPL 0. The OS has control of who gets to run at that privilege level, so there’s no danger of user programs mucking with sensitive structures like the IDT, the page tables, or the hardware itself.
My point is that if you make MSW writes privileged, which is what the 286 architects did, then there is no reason to block the processor from entering real mode when PE is cleared. That’s why the 386 and beyond permit it.
Right. The 286 couldn’t do it, and that was a major architectural flaw. I’m just questioning the reason for that architectural mistake, since the architects were aware that writing to the MSW was a privileged operation. My best guess is that they didn’t think that re-entering real mode would ever be necessary, not anticipating that real mode would be needed to handle legacy device drivers and certain BIOS calls from legacy programs.
petrushka:
[A note for anyone who is unfamiliar with ‘Weasel’. Weasel was a toy program written by Richard Dawkins to demonstrate the basic evolutionary principle of random variation and natural selection. It engendered some lively discussion between us and proponents of Intelligent Design, both here and at William Dembski’s Uncommon Descent blog.]
Interesting question. There are some parallels and some disanalogies. Let me think out loud.
Targets:
Mutations:
Selection:
None of the three — Weasel, evolution, or diffusion models — do what became known in the discussion as “explicit latching”. That is, they don’t lock changes into place in order to prevent further mutations from “undoing” the beneficial ones. That was a hot topic in the Weasel discussion, because the IDers were erroneously convinced that Weasel cheated by latching, which is something that doesn’t happen in nature.
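For anyone curious about just how simple the non-latching scheme is, here is a quick Python sketch of a Weasel-style program. It’s my own reconstruction, not Dawkins’ original code, and the population size and mutation rate are merely illustrative:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
POP_SIZE = 100       # offspring per generation (illustrative)
MUT_RATE = 0.04      # per-character mutation probability (illustrative)

def mutate(parent):
    # Every character is eligible for mutation in every generation --
    # nothing is "latched", not even characters that already match the target.
    return "".join(random.choice(ALPHABET) if random.random() < MUT_RATE else c
                   for c in parent)

def score(candidate):
    # Fitness = number of characters matching the target string.
    return sum(a == b for a, b in zip(candidate, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    offspring = [mutate(parent) for _ in range(POP_SIZE)]
    parent = max(offspring, key=score)   # cumulative selection
print(f"Reached the target in {generation} generations.")
```

Note that already-correct letters remain eligible for mutation every generation; the target is reached anyway because selection keeps the best offspring each round. That’s cumulative selection without any latching.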
The latching business was pretty funny, so I went back and googled parts of the discussion. I think my favorite bit was when the inimitable kairosfocus, having been shown that Weasel didn’t latch, insisted that it was “implicit quasi-latching”.
Good times.
ETA: kairosfocus is still at it, using the same turgid prose we found so funny:
When you look at your example of evolving an image, consider that self-driving computers do not have to be explicitly programmed. The actual training process requires one of the largest supercomputers in existence and the largest dataset in existence.
Over the course of twelve years, the set of training scenarios has been steadily refined. It started with billions of miles of actual driving. I’ve read that the current training data is synthetic, not because the situations are too complex, but because actual human drivers are too sloppy.
Driving is a bit like Douglas Adams’ definition of flying: aim for the earth and miss.
The critical part of driving is to aim for the destination, and avoid crashing.
I think you’re pretty much correct here. It seems clear to me that the 286 architects were clueless about the POST, but I think it went well beyond that. My reading (long ago, but written by one of the 286 team) was that they figured the 286 would come out of reset and the OS would take control immediately. No POST, no DOS, no legacy programs, no device drivers not written for this hypothetical OS.
This isn’t a stupid or far-fetched picture – it’s pretty much what linux does. A dedicated linux PC has only a tiny ROM that knows little more than how to load sector 0 from the disk. The sector 0 code then loads a few more sectors in real mode, enough code to build the required code, data, and interrupt descriptor tables, hooking interrupt entries to protected mode drivers, after which it’s all protected. So there are no backward compatibility issues, no legacy drivers, no stupid software tricks like we discussed earlier. My linux experience is limited to boot ROMs, so I may have the rest of this wrong…
petrushka:
I was thinking recently about an alternate universe in which we somehow didn’t know that human cognition was based on biological neural networks. Would we have stumbled upon the neural network architecture as a way of building AI, or did we absolutely need the hint from nature? Are there other ways of implementing robust machine learning that don’t depend on neural networks or something mathematically equivalent, like transformers? Where would we be today if Minsky and Papert hadn’t proven the limitations of perceptron networks, thus putting the brakes on the field, or if someone had invented back propagation (the algorithm that allows deep networks to learn) sooner?
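For readers who haven’t encountered it, the limitation Minsky and Papert pointed to is that a single-layer perceptron can’t even learn XOR, whereas backpropagation lets a network with one hidden layer learn it easily. Here’s a toy sketch in plain NumPy; the layer sizes, learning rate, and iteration count are just illustrative:

```python
import numpy as np

# XOR: the classic function a single-layer perceptron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error, propagated layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 3))   # should be close to [0, 1, 1, 0] after training
```

Nothing here is specific to XOR; the same forward/backward pattern is what scales up to deep networks.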
I’ve lost track of what I’ve posted here, but I’ve been thinking about variations in human intelligence for fifty years.
In evolution, taking a path can preclude alternative paths. Humans are unlikely to develop wings.
I’ve wondered whether humans who take certain paths in early learning are precluded from becoming proficient at some tasks, and vice versa. There is the somewhat disturbing possibility that biological evolution could predispose individuals to certain paths.
I’m not a big believer in “g”. I think g is academic proficiency, and our world favors that. But I’m thinking it’s possible to be born with greater or lesser propensity toward a set of skills, and life experience amplifies initial conditions.
Not unlike the way your image evolver amplifies variations in the noise.