I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
Recognizing the high level language means recognizing the library functions.
It would be possible to create an obfuscating compiler and linker.
Whether this would be useful, I don’t know.
In my experience, the first place to look is stack usage – how many stack frames are used, how the stack is used for isolating variables, etc.
With sufficient knowledge of the regular compiler you can always type well-functioning gibberish yourself. That kind of code would be gibberish only for less expert humans, but not for the compiler.
This cannot be. Reverse engineering is simply figuring out how it works. If you allegedly cannot reverse engineer it, then the question is: Does it even work in the first place? Probably not. Another likely answer is: You are too incompetent at reverse engineering.
No, it does not follow that it is hackproof. But it’s true that it is a fatal flaw, specifically because the question is: Does it even work in the first place? And how? And as long as it is allegedly “impossible to reverse engineer” it is also impossible to debug, therefore *unfixable* (except by complete replacement) – another fatal flaw. Given fatal flaws of this calibre, a reasonable business manager would not deploy such products.
Actually, AI is not motivated by either good or evil. AI is made to give you the optimal output, but even the makers of AI are not sure how to fine-tune the concept of optimal to be optimal for every given instance. It may be optimal in one sense while catastrophically detrimental in the particular real-life circumstance – and this keeps happening.
In any other real-life business area that matters, manufacturers would not be permitted to let loose on real-life people a product described as “black box” or “mystery” unknown to themselves. But with AI the hype is so strong that nobody cares about the lives and minds that are being destroyed. Nobody cares even about the surging electricity prices and other social and economic costs of data centres.
Back on the topic of “Is AI really intelligent?”, a good way to approach the answer (assuming anybody is interested in the answer, which is clearly the wrong assumption, as I know by now) is to identify the anthropomorphising terms in the discussion/description, strictly replace them with purely technical terms, and then read the result. This approach is appropriate given what AI is – it is software.
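As a purely mechanical sketch of the exercise (nothing more than an illustration; the particular replacements are only examples):

    # Swap anthropomorphising terms for technical ones, then re-read the claim.
    REPLACEMENTS = {
        "thinks": "computes a next-token distribution",
        "understands": "maps the input onto internal activations",
        "wants": "minimises a loss function",
    }

    def de_anthropomorphise(text: str) -> str:
        for term, technical in REPLACEMENTS.items():
            text = text.replace(term, technical)
        return text

    print(de_anthropomorphise("The AI thinks about your question and wants to help."))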
For example from the OP, “You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines?” So, the OP warmly recommends anthropomorphising the AI first, which is exactly one of the observed catastrophic category errors in AI output itself, e.g. when AI invents chess moves out of thin air, something that must not be done in chess. As long as you are sloppy like that, you have not even begun to figure out whether AI is intelligent.
Also, the OP has no problem with illogic when it supports the preference, such as “My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is.” This is completely uncritical and unanalytical. After properly de-anthropomorphising the description of AI’s behaviour, it remains to be proven whether AI “can do things” in the relevant sense at all. It certainly gives output when prompted, but what does it do, if anything, when unprompted? Does it do anything of itself? Obviously, it does not, so in the name of being scientifically analytical, let’s refrain from saying that AI ‘does’ or ‘behaves’ or ‘thinks’ or ‘creates’ when it does not.
Moreover, since we are not operating on any agreed definition of intelligence, it remains to be proven to what extent or in what sense intelligence is behaviour and whether behaviour is a necessary aspect of intelligence.
Philosophy can decide whether something can be useful even before it is deployed, depending on the ‘something’ in question. For example, from the point of view of philosophy it would be a very bad idea to test whether a nuclear explosion in the middle of Manhattan might become useful or inspire competition. From some other point of view, be it time or whatever, you might give it a go or you might not care whether anybody gives it a go…
As you obviously (don’t) know, the “self-driving” car only drives on the territory that has been mapped into it. Let it drive outside the mapped territory and a catastrophe would ensue, which is why it is hard-wired to drive specifically only on the mapped territory.
Good start. For consistency’s sake, do the same for intelligence and every other anthropomorphism you have attributed to AI. Based on what you said, give a second consideration to whether AI really “chooses” and whether it properly has goals that are “its”.
I’m glad to discover that philosophy can tell us that nuking Manhattan is a bad idea.
My kids and grandkids live there.
As to self-driving cars only going on mapped roads, you need to define “mapped”. Most roads have been mapped by Google, Apple, TomTom, and such, but GPS mapping is not precise enough to steer a car.
Mapping does not help with pedestrians, traffic, temporary construction, animal crossing, and such. Autonomous cars are doing just fine with those. Even in their current state, they are far safer than human drivers, and approaching the safety of the best human drivers.
The latest Tesla software can navigate unpaved roads and unmapped driveways. Can park in your garage, or in multi story parking garages that are not GPS mapped.
The biggest current challenge is not avoiding crashes. It is navigation. Google maps and the like.
My car GPS has my house misplaced by 500 feet.
This kind of error affects humans also, but admittedly, in different ways.
Map problems will be ironed out in the next few years. Crash avoidance is solved.
Yes, definitions are important. Therefore let’s define self-driving: It’s a car driving around without a human driver. It’s the way Waymo does it.
Tesla’s Full Self-Driving is not self-driving. It’s Full BS. It’s closer to autopilot on airplanes – of some help to pilots, but not a replacement of them.
Right now the corporate bosses believe (falsely) that AI permits nearly all workers to be fired. If they used AI to arrive at this conclusion (and clearly they did), then they have already replaced themselves by AI and they can be fired first.
Waymo had safety drivers for four years. Their only fatal accident occurred with a safety driver.
Meanwhile I remembered food delivery robots. Those are also self-driving.
And it so happens that around the office building where I work, self-driving little buses (free for passengers) were deployed for one summer across small sections of just two streets. One bus ended up in a jam at a street crossing that only it could have resolved (had it been intelligent) by backing out of the situation, but it had no concept of going in reverse. The passengers got off the bus, and an impatient truck driver intentionally crashed into the thing, pushing it out of the way. Then the buses were discontinued, because they apparently need some more work. Guess who is working on them. Themselves?
Erik, earlier you wrote:
You then contradict yourself:
Thank you for confirming it. That’s real driving, Erik, and Waymos do it. If the driving that Waymos do is real driving, then why is the story-writing that AIs do only simulated writing?
You keep avoiding that question, which suggests that you know the answer: self-driving cars really drive, and AIs really write stories. Story-writing requires intelligence, AIs write stories, therefore AIs exhibit intelligence.
Waymos are not perfect, but they exist, and while there are people trying to discredit them, I’ve only seen evidence of one accident, and it would be exaggerating to call it a fender bender.
Come back in six months to talk about FSD.
The problem is not safe driving, but living up to a nearly impossible standard. FSD has been better than average human drivers for some time. But human drivers are not particularly safe.
You are an avid fan of the 2nd Amendment, amiright? Ready to give guns to AI and let it loose on the streets?
I have seen self-driving cars and food delivery robots in person. The method by which they self-drive makes all the difference. It is simulated.
You have avoided defining simulation and never addressed any of my questions related to it. Probably because you do not know what you are talking about, as is clear from everything else you have said on this topic.
petrushka:
Right. Erik keeps falling into the same trap: he’ll point to some imperfection in an AI’s performance as evidence that it isn’t intelligent, not realizing that by his standard, humans aren’t intelligent either. He also keeps making unwarranted extrapolations: If a particular AI isn’t intelligent, then none of them are. If there’s something that AIs can’t currently do, then AIs will never be able to do it.
Autonomous weapons have been around for a long time. Mostly, humans chose the targets, and the guidance system simply tracked the target. But recently there have been loitering drones that select their targets. So someone thinks it’s a good idea.
This is somewhat different from autonomous cars that are designed to avoid doing harm.
My own car has a rather primitive accident avoidance system. It is quite cautious in reverse. It is paranoid about people in the way. It knows whether people are moving into the path of the car, or moving out of the path.
It notices if I am closing too fast on the car ahead, and will hit the brakes. This is especially useful when entering expressways, because the people ahead sometimes chicken out and stop, while you are looking back.
These simple systems are said to prevent 85 percent of accidents. My insurance premiums are lower than they were ten years ago.
FSD can do all this, plus steer in heavy traffic, avoid animals, bicycles, pedestrians, and navigate to your destination, and park. The consensus seems to be it requires intervention about once in a thousand miles. The goal is one in a hundred thousand miles, and no interventions required to prevent serious accidents.
No, this is your misconception. Final proof that you have no clue what you are talking about.
False. I am familiar with the Ukraine war in minute detail. AI-driven drones are trained to select either specific targets (by coordinates) or things shaped like known war machinery, such as specific airplane models, tanks and the like. Drones that select their targets blindly or in the abstract are still not a thing. Soon they may become a thing, and it would be a bad idea, about as bad as a nuclear explosion in the middle of Manhattan, certainly a worse idea than land mines, because they would be moving/flying land mines.
Erik:
If an AI guides a car from point A to point B, that’s real driving. You confirmed it yourself:
Driving around without a human driver is driving. Real driving. Writing a story without human assistance is story-writing. Real story-writing.
Your admission was an own goal, so now (to mix sports metaphors) you are moving the goalposts. It’s pretty clear where you’re headed: if self-driving cars don’t drive in exactly the way humans do, then their driving is fake. That leads to the ridiculous conclusion that guiding a car from Newark to Philadelphia is sometimes real driving, sometimes fake, even though the result is the same: the car ends up in Philadelphia.
And if self-driving cars did drive in exactly the way that humans do, you’d move the goalposts yet again. You’d resurrect your “AIs don’t defecate, therefore they’re not intelligent” argument.
You want AIs to be unintelligent, and you’re hunting for reasons to claim that they are. It’s an assumed conclusion, but you’re failing to find a chain of reasoning that leads to it.
Legally and socially, the problem *is* safe driving, because as long as it is not safe you need to blame somebody for it. When people cause accidents, you can blame them legally, but who is this AI that you can blame for driving like a moron, for parking where it shouldn’t, etc.?
So the more fundamental issue is: Who really did it? As soon as it is safe, as soon as AI does everything correctly according to our wishes, the question becomes: Who is AI, and how should we treat it? I’m already far ahead of you and raised this question pages ago – and answered it too. But you are not there yet, so let it be.
keiths, Just for the giggles, can you undo your circular reasoning that you gleefully engage in in the OP, stop assuming your conclusion and start talking sense?
Edit: Task #1: Define simulation.
You do realize that there’s a difference between a human selecting a target and the missile tracking the target, and a human releasing a drone that can loiter until it finds a target?
Well, if I understand correctly, you mean a difference between a human releasing a missile on a selected target and a human releasing a drone on a target.
Nope. There is only a technical difference between a missile and an AI-driven drone. It ends up being a tactical difference in warfare, but no difference from the human point of view. What difference do you have in mind?
The legal status of autonomous vehicles is undecided. Statistically unavoidable harm is handled with insurance. Negligent harm is tricky, and results in headlines. The stated goal for autonomous vehicles is to have one-tenth the accident rate of human drivers.
In five billion miles on Autopilot, there have been 14 deaths and 50 citations. Autopilot is primitive compared to FSD.
Waymo is a taxi company. Its cars are driverless taxis. This is not an undecided legal status. Similarly, the legal status of all AI products, hardware and software, will have to be decided. “Intelligence” will very likely never be part of it (or it could happen, if the AI lobby is powerful enough, but it would be the wrong thing to do). Trademarks, copyright and patents already are decided, and they should suffice.
There is one more nuance to self driving.
Taxis do not do the last 50 feet of driving. They do not have to park in your garage. They do not have to find a parking slot at Costco.
They pick up riders on the street and proceed to the best available drop off point at the destination. Robotaxis are designed without steering wheels. Eventually they will be available for purchase by disabled people, even legally blind people.
This is much further along than you think. It could happen next year.
According to Elon Musk? According to Elon Musk, he flew to Mars last year.
Anyway, there are answers to your nuance. It does not raise anything new.
There are 10,000 people using FSD every day. Hundreds of them post drive videos every day. There are many glitches. The glitches are mostly being fixed within days.
That is the strength of the system.
To prevent misunderstanding, a glitch is an annoyance or an action prompting intervention. Glitches are not safety issues.
You seem to be going by Musk’s definitions. Glitches in so-called self-driving software are OF COURSE SAFETY ISSUES, WITH LOUD SCREAMING OBVIOUSNESS. I have seen some videos of the said glitches.
Anyway, you have stopped adding anything new to the matter.
Do you have a link to a safety issue, something from the last month or two?
Or at least a description?
Why ask me? You could just ask Grok.
Anyway, easy peasy https://electrek.co/2025/10/29/tesla-full-self-driving-v14-disappoints-with-hallucinations-brake-stabbing-speeding/
petrushka:
Erik:
No, he’s talking about the rather obvious difference between a) a human selecting the target and b) the drone selecting the target.
Flint:
Processor designers hate crap like that. I worked mostly on X86 processors during the first part of my career, which meant maintaining software compatibility all the way back to the Stone Age of the original IBM PC. In those early days, in both PCs and embedded systems, memory was a scarce resource and developers would resort to tricks like that in order to cram their code into the available space. Fast-forward decades and tricks like that are no longer needed, but they’re still out there in the code base and compatibility must be maintained, so we’re forced to add kludgy hardware to handle them. Examples:
1. The thing you described about jumping into the middle of an instruction can be a pain for a processor that does predecoding. X86 instructions are variable-length, so in a processor that’s decoding more than one instruction per cycle, correctly lining instructions up with the decoders is a critical timing path. For example, if you’re decoding three instructions per cycle, you have to figure out how long instructions 1 and 2 are in order to mux the correct bytes to the third decoder, and that takes precious picoseconds. One way of dealing with it is to partially decode the instructions when they’re fetched from memory and mark the instruction boundaries in the instruction cache so that they don’t have to be determined at decode time. Those marked boundaries are incorrect if you jump into the middle of an instruction. You have to detect that in hardware, invalidate the cache line, and refetch the code, adding complexity and wasting silicon just to handle a coding style that practically no one uses.
2. Self-modifying code. In the old days, “clever” programmers would sometimes modify instructions right before executing them. It worked on old, slow processors, but it would break on newer ones that have all sorts of hardware accelerators built into them, were it not for the fact that poor processor designers are forced to guarantee backward compatibility.
To make it work, you have to a) snoop the I-cache on writes and invalidate matching lines, b) invalidate any matching entries in relevant structures such as branch target buffers, c) detect and invalidate any matching instructions in flight in the entire instruction pipeline, and d) even potentially issue aborts to undo the speculative execution of the affected instructions. Ugh.
3. Software delay loops. A lot of old code implements delays using software loops instead of relying on external time references. Delays that were sufficient on old processors are insufficient on new ones that execute the same code blazingly fast. Those issues are a bitch to debug, and there’s no way to address them in hardware because the whole point is to make the processor fast, not slow. So you have to explain to the customer that their software (which they’ve been using reliably for years) is failing now not because of a processor bug, but because of a bug that’s been lying dormant in the software for decades, waiting for a fast processor to come along and sensitize it. (A tiny sketch of the difference follows this list.)
4. Reliance on undocumented behavior. Example: the spec says that instruction XYZ leaves the Q flag undefined, but you can count on some coder having noticed that the Q flag always behaves in a certain way, so they write their code to depend on that behavior. Future processor designer comes along and doesn’t worry about the Q flag after XYZ because the spec says it doesn’t matter. Software breaks, designer spends days or weeks tracking the failure down only to discover that the *%^&^%$! coder didn’t follow the spec. (A rough software analogue is sketched below as well.)
5. Segmentation. This isn’t a coding trick per se, but there’s old software that depends on X86 segmentation, which nobody uses any more. Segments are always flat now. Supporting segmentation adds a bunch of complexity to the design for very little gain.
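Re #3, a minimal Python sketch of the difference (mine, purely illustrative; the real offenders were in assembly or C): a delay calibrated in loop iterations silently shrinks on faster hardware, while one tied to a time reference does not.

    import time

    # Old-school delay calibrated in iterations, not wall-clock time. On the
    # processor it was tuned for, this took "long enough"; on one ten times
    # faster, it silently takes a tenth of the time.
    def busy_delay(iterations: int = 1_000_000) -> None:
        x = 0
        for _ in range(iterations):
            x += 1

    # The portable version: the delay is defined in time, not in iterations.
    def timed_delay(seconds: float = 0.05) -> None:
        deadline = time.perf_counter() + seconds
        while time.perf_counter() < deadline:
            pass

And a rough software analogue of #4, again just my illustration: before Python 3.7 the language spec said dict ordering was undefined, but CPython 3.6 happened to preserve insertion order, and code quietly started depending on that “Q flag”.

    # Only correct if insertion order is preserved -- guaranteed from 3.7 on,
    # an accident of the implementation in 3.6, wrong to assume before that.
    settings = {"host": "example.com", "port": 8080, "debug": True}
    first_key = next(iter(settings))
    print(first_key)   # "host"... if the implementation happens to cooperate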
That’s a special case of #2 above, where the processor in question doesn’t properly handle self-modifying code. The problem with tricks like that is that the developer can only check for quirks that they already know about, and they don’t know what the quirks will be (if any) on future compatible processors. Hence the introduction of the CPUID instruction.
Fun for the coders, irritating to the processor designers. You’re giving me flashbacks, lol.
petrushka:
Erik:
You need knowledge of the language, not of the compiler, and typing “well-functioning gibberish” would be a complete waste of time for a developer (as well as being bug-prone). If you want to obfuscate your code, you use an obfuscator. You don’t do it by hand.
Obfuscated code is gibberish even to experts, but never to compilers, which robotically follow the language spec and therefore aren’t confused.
Here’s an obfuscated program in Python:
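(A representative stand-in for the kind of program I mean; it’s a little XOR decoder:)

    _, __, ___ = (lambda *a: a)(0x2A, [98, 79, 70, 70, 69, 6, 10, 125, 69, 88, 70, 78, 11], ''.join)
    print(___(map(lambda ____: chr(____ ^ _), __)))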
Do you think an expert could just sit down and tell you what that program does? Experts are not human compilers.
It prints “Hello, World!”, believe it or not.
Erik:
Um, no. The OP suggests that we should judge intelligence based on performance, and you don’t have to anthropomorphize something in order to do that. Excavators are good at digging, and we can confidently state that without anthropomorphizing them.
Incorrect. LLMs require prompts not by nature but by design. Their developers don’t want them wasting compute time and electricity doing things that no one has asked them to do. I’ve already given an example of AIs doing things on their own, unprompted, when they are introduced into the virtual environment of a video game and learn to play the game well by trying things and observing the results.
Intelligence is not behavior, but it manifests itself in behavior. The behavior of a self-driving car is to get from point A to point B safely and efficiently. The behavior of an LLM when asked to write a story is to write a story.
keiths:
False, as noted above. It can drive just fine in unmapped territory, though it may not know where the roads lead. But kudos to you for acknowledging that it drives in the territory, not in the map! That’s progress.
The driving occurs in the territory, and the car ends up in Philadelphia. It’s real driving. The story-writing occurs in the territory, and the story ends up in the territory too. It’s real story-writing.
What catastrophe? As petrushka points out, nothing terrible happens. The car doesn’t drive into a tree or run over pedestrians. It still knows how to drive; it just doesn’t know where it is. When a human driver gets lost, it’s the same. No catastrophe.
keiths:
Erik:
We routinely talk about the actions of non-sentient things using language like that, and it doesn’t mean that we’re anthropomorphizing them. An airliner sees that the angle of attack is too high, so it activates the stick shaker. Its goal is to alert the pilots before the plane stalls. It wants to keep the G-load within the design envelope, so it limits control surface deflection (if it’s an Airbus, anyway*). “Sees”, “goal”, “wants” — everyone knows that we’re not attributing sentience to the plane. We’re taking what Daniel Dennett called “the intentional stance”, and few people are confused by it. You might be one of the few.
* Airbus and Boeing have different philosophies on this. On a Boeing, the pilots can choose maximum control deflection even if it will cause the design limits to be exceeded. The rationale is that the pilots should have maximum flexibility in an emergency. If the plane is about to collide with another, for instance, and full control deflection is needed to avoid the collision, then full deflection is warranted. A chance of structural failure is better than the certainty of a midair collision.
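A toy sketch of the two philosophies, with a made-up limit and function names of my own (nothing from any real flight-control code):

    # Made-up design limit, purely for illustration.
    MAX_DEFLECTION_DEG = 30.0

    def envelope_protected(commanded_deg: float) -> float:
        # Airbus-style hard limit: the command is clamped to the design envelope.
        return max(-MAX_DEFLECTION_DEG, min(MAX_DEFLECTION_DEG, commanded_deg))

    def pilot_authority(commanded_deg: float) -> float:
        # Boeing-style: warn, but let the pilots knowingly exceed the limit.
        if abs(commanded_deg) > MAX_DEFLECTION_DEG:
            print("warning: command exceeds the design envelope")
        return commanded_deg

    print(envelope_protected(45.0))   # 30.0: the structure is protected
    print(pilot_authority(45.0))      # 45.0: the pilots keep full authority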
Ray Duncan, long ago, told an interesting debugging tale. Seems his team sold a version of Forth. They decided to upgrade all the video access, so they factored out all those accesses into a software interrupt (x86 land). IBM documented that software interrupts F0 through FF were available, so they grabbed FF. Tested it, it worked fine, they sold it and reports came in that running the old VDisk application made their Forth programs crash. Figuring this out took weeks and a dedicated Atron debug board.
Now, since I wrote a whole lot of BIOS code, I could have told him immediately what was going wrong. The 286 processor had protected mode, able to address memory above 1M. Problem was, there was NO WAY for the 286 to return to real mode. For VDisk to work it had to switch to protmode, do the “disk” IO, and switch back. So IBM’s BIOS folks went to work and devised a kludge.
What they did was 1) program a new command into the keyboard processor to pull a pin low that they wired to the CPU reset line; 2) add code in the kbd processor to support this; 3) Allocate byte offset 0F in the CMOS/RTC to support new commands; 4) rewrite sections of the BIOS to identify whether a system reset was due to the kbd microcontroller or something else (the BIOS itself switched the 286 into and out of protmode several times during POST anyway, to test RAM above 1M, so this kbd trick was already programmed into the BIOS), 5) write VDisk to put the magic value into CMOS, the restart address (jump target) into the CMOS data area, and send the reset command to the kbd processor. Upon reset, the BIOS in turn read this byte, did a bit of housekeeping and executed the far jump.
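In Python-flavoured pseudocode, the dance looks roughly like this (the CMOS offset 0F is real; the shutdown-status value and addresses here are just placeholders):

    CMOS = {}      # battery-backed CMOS/RTC bytes; offset 0x0F = shutdown status
    RESUME = {}    # stand-in for the low-memory area holding the far-jump target

    def keyboard_controller_pulse_reset():
        # IBM wired a spare keyboard-controller pin to the CPU reset line.
        bios_post_entry(reset_came_from_kbd=True)

    def vdisk_return_to_real_mode(resume_address):
        CMOS[0x0F] = 0x0A                  # "resume via far jump" code (placeholder)
        RESUME["target"] = resume_address
        keyboard_controller_pulse_reset()

    def bios_post_entry(reset_came_from_kbd):
        if reset_came_from_kbd and CMOS.get(0x0F):
            print("warm reset: far jump back to", hex(RESUME["target"]))
        else:
            print("cold boot: run the full POST")

    vdisk_return_to_real_mode(0x8F000)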
But the BIOS people had a problem with the stack frame. The stack frame must always point to RAM to store return addresses, but the BIOS did not own any RAM. Where to put the stack? Finally, IBM chose to set SS to 0 and SP to 0400 (hex). This is the top of the interrupt table, so any push or call instruction will trash interrupt FF (and Ray Duncan’s Forth).
Now, one of the rules for switching out of protmode on the 286 is that since the values of all registers will be lost, they have to be stored somewhere and recovered before using the stack! VDisk broke this rule.
Again, fun times.
Footnote: Novell networks were already using interrupts F0-F7, and Duncan knew that.
I also have stories about copy protection schemes that used software timing loops breaking on the early Turbo (6.66, remember that?) processors. I developed some skill at defeating copy protection and re-assembling code, including schemes that involved physically altered floppy disks. One job I had was to write code for a system to mass-duplicate such disks for the market (mostly games).
Today, all that seems like forever ago. I always thought getting old would take longer. All that code I wrote for the 8080 and the 6502, hopefully now gone forever…
Someone besides keiths is anthropomorphizing intelligence.
Intelligence, as it applies to AI, is an observable behavior. One need not speculate about what is inside.
Unless you are a competitor seeking an advantage.
Flint:
Yeah, Intel really screwed up by making it impossible to return to real mode by clearing the PE bit. I knew it caused problems, but until now I had no idea the gymnastics that the BIOS people had to go through to work around that. On the other hand, I’m not surprised to hear that they put the kludge in the keyboard controller. That was where they put GateA20, too! They should have called it the kludge controller.
I was remiss in not including GateA20 on my list of peeves, because that actually affected the processors I worked on. They had to be GateA20-aware in order to maintain cache coherence, because otherwise writes above 1M would go to the wrong cache line. There were a bunch of other consequences too, including the fact that the self-modifying code logic had to monitor GateA20 changes. With GateA20 enabled, writes above 1M would alias to low addresses, and if you were executing in that region, you had to catch the writes even though the addresses didn’t match. I honestly don’t know if anyone ever wrote code that depended on that behavior, but we implemented it anyway. Better safe than sorry, and what’s one more kludge when you’ve already implemented a bunch of others?
Off to the Bit Bucket in the Sky, alongside my old processors.
I cut my assembly language teeth on the 6502 in college. I have fond memories of that class, because a) it was cool to do bare metal programming, and b) my lab partner became my college girlfriend. Hot and smart. A killer combination.
petrushka:
Yeah, it’s clear that he wants (or needs) intelligence, or at least human-level intelligence, to be out of the reach of machines. I’m not sure how he feels about animal intelligence.
I wish he wouldn’t be so coy about this nonphysical thingamajig (the soul?) that he imagines animates our intelligence, which machines will never possess. It would make the discussion more productive.
Just for fun.
petrushka:
Haha. We should also require a carbon offset for comments like these:
Regarding GateA20, I know of code, including the BIOS, which set CS to FFFF and IP to anything, thus accessing 64K of memory above 1M in real mode. For this to work, GateA20 had to be turned off… (and you needed more than 1M of installed RAM)
Another footnote – the 286 designers likely believed that once the CPU was switched into protected mode, the OS would take control and if there were any way to return to real mode, the OS would not be possible. But they somehow didn’t realize that the BIOS, during POST, would need to test RAM above 1M and to make sure the interrupt segment worked, the call gates weren’t broken, and that the hidden bits of the registers were working, etc. So POST absolutely required switching in and out of protmode, and the chip designers should have known this. The BIOS also needed to handle the invalid opcode interrupt to emulate the undocumented saveall instruction. So Kludge territory wasn’t limited to the hardware.
Oh, and the chip people documented the first 16 IRQ vectors as reserved for future use, and the BIOS ignored that. Those hardware and software teams didn’t communicate very well…
Flint:
But CR0 writes (or whatever the equivalent register was on the 286) were privileged, so there wouldn’t have been any danger of rogue code going real.
When debugging X86 processors (which were all 386+ compatible), I was surprised to see OSes return to real mode occasionally during normal operation. I never bothered to figure out why, because it wasn’t relevant to the bugs I was pursuing.
I took evenly spaced snapshots of an AI image being generated on my home PC to get a better intuition about how the model navigates through image space:
The first image is pure noise, and the final image is the finished product. What surprises me is that the positioning of the major features gets fixed very early, and the rest of the generation process is a hierarchical filling in of details in that static framework. I had expected features to “wander around” a little more. It’s possible that they do sometimes, but not with the particular settings I’m using.
I made her hair long and multicolored because I wanted to see how (implicitly) aware the model was of the structure of hair, and whether it would apply the colors in a way that reflected that structure by following the strand lines.
If you hold your phone at arm’s length (or scoot away from your monitor), you can already see some structure forming in the second image.
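For anyone curious how to grab snapshots like these, here is roughly the idea, sketched with the Hugging Face diffusers step-end callback. It’s a generic sketch rather than my exact setup; the model name, prompt, and step counts are stand-ins.

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    SNAPSHOT_EVERY = 5        # keep an intermediate image every few steps
    snapshots = []

    def grab_snapshot(pipeline, step, timestep, callback_kwargs):
        if step % SNAPSHOT_EVERY == 0:
            latents = callback_kwargs["latents"]
            with torch.no_grad():
                # Decode the partially denoised latents so you can watch the
                # image take shape mid-generation.
                decoded = pipeline.vae.decode(
                    latents / pipeline.vae.config.scaling_factor
                ).sample
            snapshots.append(decoded.detach().cpu())
        return callback_kwargs

    result = pipe(
        "portrait of a woman with long, multicolored hair",
        num_inference_steps=30,
        callback_on_step_end=grab_snapshot,
        callback_on_step_end_tensor_inputs=["latents"],
    )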
Yes, the 286 had a CR0 register. And yes, writes to this register were privileged. But that was the whole point. The 286 designers figured that the CPU would come out of reset in real mode, and the OS would create the necessary segments and data structures, then switch into protected mode once and for all, with no danger of going real again. I think the whole concept of the POST escaped them.
They also envisioned using multiple protection levels, highest level for the OS core, lower levels for device drivers, etc. With complicated call gates and call gate structures to change levels without letting malicious code switch levels behind the OS’s back. I don’t think Microsoft ever implemented the use of multiple protection levels, though.
I was under the impression that the Motorola 68000 was the first competent 32 bit, multitasking CPU on a chip. IBM briefly made the IBM 9000 computer using this chip.
Changing the subject, one of the reasons AI-driven cars will be safer than human-driven ones is that the cars have multiple cameras or sensors that can discriminate hazards from all directions, and with adequate processors, there is minimal delay in taking evasive action. There will always be no-win situations, but the computer will never be drunk, sleepy, or distracted.
No, that title went to the HP FOCUS, released in 1982. The Motorola 68020 was released in 1984. Intel’s 386 came out in 1985.
I’m not sure we are entirely there yet. My reading is that AI-driven cars tend to react properly more often than people do in most “normal” edge cases, but the worst results happen when those cars do “dumb” things people would never do.
Here’s a scenario for you: You’re driving down a 2-lane highway and an oncoming car swerves into your lane, threatening a head-on collision. What should you do? Swerve left to take his lane and stay on the road, or swerve right into the ditch? In practice, most people take his lane to stay on the road, only to have the oncoming driver realize what he’s doing and jerk back into his lane – and right into you! This happens enough that there is legal precedent – if you try to take his lane, you are responsible for the collision (since it happened in his lane). I wonder what an AI-driven car would do, but it is most likely programmed to drive off the road to the right.
Flint,
I get all that, but what I didn’t get was this. You wrote:
Why would they believe that an OS wouldn’t be possible? If writes to the MSW (that’s what it was called on the 286) are privileged, then an OS can control the transitions to real mode and prevent user programs from doing it on their own.
As I mentioned, I saw OSes do it myself while I was debugging processor problems.
Recorded instances are rare.
But several things need to be said: the car can see in all directions and can quickly evaluate possible escape routes. The specific options are not programmed. The scenarios are trained, and no one can predict the actual action taken.
Another point: the cars are constantly evaluating distant objects, and in actual cases, avoid getting into desperate scenarios. There are dozens of videos of situations that could be tragic, but are avoided so smoothly that humans may not even realize the problem.
Then there are scenarios where no effective action is possible. I took a defensive driving course some years ago, and we were told to avoid head on collisions at all cost, even if it meant steering into a solid object.
Simple crash avoidance systems have been around for a while. Statistically, they are much better than humans. AI is better, and it is improving quickly.