I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
If one measure of intelligence is understanding the net effect on society of following the rules, of obeying the laws, of honesty and compassion, the correlation between these and Trump administration Ivy League degrees can only be negative.
Flint:
I think they understand the effect on society. They just don’t care. It’s a moral failure, not a cognitive one.
He constantly presents himself as a super genius. And he says he has plenty of smart people around him – where smart means either a rich suckup or a foreign dictator. Unfortunately he has more than the minimal critical mass of support for whatever he does.
Trump’s rule has sufficiently redefined American social norms by now. It is most damning that this could happen without changing the constitution. Hitler had to pass the Ermächtigungsgesetz (the Enabling Act) to shut down parliament (and first go through another round of elections to soften it up). In the USA, Congress has shut itself up of its own free will to please Trump.
Erik:
Rules of morality are norms, and in any case the criteria for ASPD are replete with moral judgments.
You’re misrepresenting the criterion. It doesn’t say “failure to conform to social norms”. It says
And the rest of the criteria (for both ASPD and NPD) are full of moral phrasing. Examples: “interpersonally exploitative”, “lacks empathy”, “violation of the rights of others”, “repeated lying”, “conning others for personal profit or pleasure”, and so on. Those are moral judgments.
Sure, as long as you are talking about “rules of morality”. But I’m talking about morality as in moral character. The moral character is the core of morality. Moral behaviour and rules are the fluff of it. Modern psychology is emphasising the fluff of it so much that the core is often forgotten, as exemplified by your own insistence that morality is norms.
The literal quote is towards the bottom of this post of yours.
Empathy and remorse are good terms for indicating moral character. The rest is all about social norms. Trump is changing the social norms the same way as he is changing the concept of legality: Whatever Trump says is legal is legal and that’s the law. SCOTUS agrees. Similarly, “smart” and “good” mean people, things and events that Trump likes and that’s all.
By now everybody should have taken note how at least the concept of lying has changed. When Trumpites push back that they are not lying even when spouting obvious falsehoods, they are sincere about it – sincere because Trump behaves the same way and it got him the presidency twice. When society rewards a certain kind of behaviour, then that behaviour is a social norm. In the USA, Trump is the social norm now. And if sociologists are correct, there are deeper social forces at work – the norms started morphing already around the era of W and are expected to persist after Trump is gone.
The writing is fake. Do you know what simulation is?
It does not create new material, certainly not “like humans do”. Do you know what simulation is?
So you do not know what simulation is and you want me to explain.
For anyone who has worked with an abacus or a pocket calculator, the difference between human arithmetic and machine/computer arithmetic is very clear in elucidating the concept of simulation. I explained this in detail when we had the same discussion maybe two years ago. Here’s the thing – you are allegedly a computer professional to whom the difference between how humans do arithmetic and how a pocket calculator works should be self-evident as a matter of your profession. I would be willing to help you out if you never were a computer professional, but you say you were/are, so help yourself.
No, the writing is not fake. It is not a simulation. It is real. You are simply repeating the same religious conviction.
It certainly does create new material, according to every metric you can produce. The output is as original as anything people write. Keiths is right, you have decided in advance that AI is not intelligent, and your only defense is to keep repeating this catechism while accusing a professional simulator of not understanding simulation. Do you even realize how stupid you look?
Your responses get sillier all the time. So if one person uses a pocket calculator and the other uses paper and pencil and they come up with the same answer, the calculator is a simulation tool but the pencil is not? You have again drawn a distinction without a difference.
Ah, a tiny spark of actual intelligence! Here is someone who absolutely must know what he’s talking about. How could he be so stupid as not to understand basic fundamentals of his own profession, that you must explain them!?
Erik, if you are a simulation, you aren’t a very good one. AI systems actually learn. You might try that sometime.
Flint:
Good one. Wouldn’t it be ironic if Erik actually were a chatbot and we were unwitting participants in a Turing test? He’d pass, at least with me. He has me convinced that he’s a human, albeit a stubborn and dogmatic one.
Kidding aside, it does raise a serious question: Erik, if you had to prove that you’re a human, using nothing but your comments at TSZ, could you do it? If so, how?
keiths:
Erik:
That doesn’t answer the question. I’ll try again:
Erik:
Yes. Do you?
keiths:
Erik:
Material that has never existed before isn’t new material? What makes it unoriginal, then? When answering, keep in mind that you don’t want to rule out human intelligence. You need to find a principled way to rule out machine intelligence while not ruling out human intelligence. And no, simply asserting that machines aren’t intelligent, but humans are, won’t cut it.
Yes. Do you?
keiths:
Erik:
No. The key phrase is “in your view”. I want to understand what you think simulation is so that I can evaluate your argument, which rests on the idea that AIs only simulate intelligence. If you’re having trouble describing it abstractly, how about a concrete example? Writing stories, for instance.
How would you answer this question: If AIs only simulate writing, why are they able to produce real stories?
If it were clear to you, you’d be able to describe it. You can’t. Every time I ask, you dodge the question. I’ll ask again: what precisely is a simulation, in your view, and why is computer arithmetic merely a simulation in your view if human arithmetic is not?
I explained the difference between simulation and reality here using the example of self-driving cars. Did you understand it? Here’s the simple analogy with arithmetic:
Self-driving software drives cars. It’s real driving, not simulated driving, because the cars actually move in the real world, guided by the software. A self-driving car can take you from San Jose to Morgan Hill because it is driving in reality. If the driving were only a simulation, the car wouldn’t be able to take you anywhere.
Computers do arithmetic. It’s real arithmetic, not simulated arithmetic, because it produces actual numbers in the real world. A computer can calculate your taxes and display fractals on your monitor because it is doing actual arithmetic. If it were only simulated arithmetic, you wouldn’t find out how much tax you owe and the pretty fractals would not be displayed on your screen.
Self-driving is actual driving. Computer arithmetic is actual arithmetic. You disagree, so please explain why computer arithmetic isn’t real arithmetic without simply asserting that it’s simulated since a machine is carrying it out.
I remember that you made some failed arguments. Do you have one you’d like to recycle?
Evident, but not self-evident. Yes, humans do arithmetic differently than computers. For one thing, humans use neurons and computers don’t. There are other differences, too. So what?
Are you saying that unless computers do arithmetic in exactly the same way as humans, it’s simulated arithmetic? And what does it mean to do arithmetic the way a human does, precisely? In other words, what would a computer have to do that it doesn’t already do in order to qualify as doing “real” arithmetic?
Does it have something to do with whatever nonphysical entity or process you believe operates in us?
That’s a pretty lame excuse for not presenting an argument. I’m not falling for it, and I doubt that anyone else is either. If you had a defensible argument, you’d make it.
But OK, I’ll play along. Let’s stipulate that I am unworthy to be “taught” by Erik the Great. What about Flint? What about the readers? What’s your excuse for not presenting your argument for their benefit?
This seems like a typical skepticalzone case of talking past each other.
Things made by machines are real things.
What seems to be the issue is whether the making involves a conscious agent.
Another endless squabble that cannot be resolved.
Isaac Asimov made a good living writing about whether robots could be conscious, and whether they could attain human rights.
The movie, AI, merges Asimov’s robot history with the Philip K. Dick stories by depicting a likely outcome of this conundrum. Humans cannot tolerate the existence of a competitive species, and destroy the robots.
The Terminator movies posit the same conflict, with the opposite outcome.
petrushka:
Agreed, and that includes the things produced by AI and computers. That’s worth stressing because at one point it appeared that Erik was actually trying to deny that AI-produced stories are real stories:
That’s bizarre, and I responded that even children can tell the difference between a story and a non-story. I don’t know if Erik still believes that AI-produced stories aren’t real or whether he accepts that they’re real but insists that the process that produces them is only a simulation. Ditto for arithmetic. I don’t know if he thinks that computer arithmetic produces fake results or whether he accepts that the results are real but insists that the process is fake.
That appears to be at least part of the issue for him. Earlier in the thread, he argued that if I credit AI with intelligence, I must also credit it with emotions, because it’s “self-evident” that they go together. I don’t agree. While emotion requires sentience, intelligence doesn’t — unless you define it that way. If Erik wants to define it that way, fine — then we’ve identified the fundamental point of disagreement. However, he seems to believe that it isn’t definitional, but rather a conclusion that we can reach through reason. He hasn’t presented an argument that leads to his desired conclusion.
The problem at the moment is that Erik doesn’t actually want to debate. He just wants to state his supposedly self-evident conclusion without responding to challenges. This is a perfect example:
That’s obviously a dodge. He knows that I’m asking for his views, not his help, and he knows that my being a computer professional has nothing to do with whether he can make his case. I think he knows that his position won’t hold up well to scrutiny, so he’s playing it safe by not arguing for it.
For a debate to be productive, both sides need to participate rather than making excuses for not participating. I’ve laid out my argument openly and clearly for why AI writing and computer arithmetic are real, not simulated. If Erik isn’t willing to respond substantively, the discussion will remain stuck here.
I think one important difference is that living organisms have a desire to survive. They avoid dangers of all sorts, as well as they can, or actively attempt to neutralize a danger. They all seem to fear death, right down to bacteria. They take the initiative to avoid or minimize death or injury. They initiate actions without any external impetus to do so. For the most part, they do these things voluntarily. What makes machine intelligences scary in science fiction isn’t that they are so smart or capable of solving problems, it’s that they take matters into their own “hands” without external instruction or direction.
What makes their intelligence “inferior”, then, is that they don’t want to survive, or to solve any problem or handle any task, they get no satisfaction from having done so, they don’t feel guilty if they get it wrong and don’t try to correct their errors unless directed to do so. They are indifferent to punishments or rewards. So maybe intelligence isn’t the ability to write unique stories when asked, intelligence is doing that without anyone asking.
What makes AI different is, it’s not evolved. Living things want to survive because all the layers of tropisms and reflexes resulted in survival. Competition favored more complex systems.
LLMs are shocking because a century of speculation assumed that reason and language are more complex than “instinct”, but the reverse is probably true.
Rats are more complex than LLMs.
Drone makers have solved some of the problems of flying, but have not tackled the problems of survival and reproduction.
All these “primitive” layers constitute a truth seeking system. AI has no equivalent.
AI vs human curated encyclopedia:
https://grokipedia.com/page/June_Lockhart
https://en.wikipedia.org/wiki/June_Lockhart
What is shocking is how persistent this easy mistake is – the mistake of believing that the map is the territory.
An Excel worksheet modelling the economy is not the economy. An Excel worksheet is software. So far so good, I guess. Probably nobody questions this.
Similarly, an LLM is not language. Neither is it reason. An LLM is software. But on this point people begin to waver and somehow we have, let’s see, about 160 posts’ worth of boneheaded insistence on an easy category error.
The thing called programming language is not language either. It is software. And artificial intelligence is not intelligence. Plastic toy veggies are not veggies. They are plastic.
I have made my case. You have not made yours. You say you know what simulation is, yet you have neither defined it nor used it in the discussion here to analyze what AI is and how it works. More damningly, you have also not defined intelligence to provide support that there is intelligence in AI, be it “true” or “real” or to any IQ level. Of course, since by your profession you should know what simulation is but clearly you have no clue what it is and why it is relevant to this discussion, I expect you to know even less about intelligence.
After all these pages of posts, you have not even begun to build a rational case in support of your AI fanaticism.
I cannot grasp your point of view. I’m not being hostile to it, but I don’t understand it. AI is software, agreed. I don’t believe it is sentient, and I don’t see keiths arguing that it is.
From my point of view, the only issue is whether AI joins the ranks of useful devices.
I’m pretty sure it already has.
Other machines have weaknesses and drawbacks and safety issues.
Now I will say something controversial.
Grokipedia is an online encyclopedia you can argue with.
I make no claims about its accuracy. It says version 0.1, which is usually a clue to its state of readiness.
I make no claims that it “understands” your arguments.
The issue is whether it is, or will become useful, and whether it will inspire competition.
This will be decided by time, not by philosophy.
Erik, to petrushka:
You keep referring generically to map versus territory fallacies, but you never explain how that applies to my arguments regarding AI. I covered the map/territory question earlier in the thread, but I’ll reproduce my comment here. Please respond to it instead of ignoring it and pretending that I haven’t addressed the issue.
I wrote:
I’m glad you brought up the map/territory distinction, because I can use it to explain what you’re missing here. Let’s talk about self-driving cars.
When a self-driving car drives from point A to point B, it is traveling in the real world. It’s traversing the territory, not the map (although it will also be tracking its position on a map). That isn’t simulated driving. It’s real driving, because the car really gets driven from A to B in the real world.
What does simulated driving look like? It’s a process that only involves the map, not the territory. It’s what happens when engineers are working on a new release of their self-driving software. They don’t immediately stick it into a car and take it out on the road — that would be too dangerous. Instead, they run it in a simulator. The simulator pretends to be the car that the software will eventually plug into, and it also pretends to be the world that the car will eventually be driving through. The software is driving a simulated car in a simulated world, and it’s getting from point A to point B in that simulated world, but not in reality. The simulation is a map, and the real world is the territory. Driving in the map (the simulator) is simulated driving, and driving in the territory (the real world) is real driving.
Does the software do real driving? Yes, when it’s plugged into an actual car and is navigating its way through the actual world. No, when it’s only plugged into the simulator and isn’t navigating through the actual world. An easy distinction to comprehend, right?
Now let’s compare the self-driving car to an AI that is asked to write a story. How can you tell whether an AI has really written a story, versus only simulating it? You look in the real world — the territory — to see if a story shows up. If it does, then the AI has written a story.
What happens? A story shows up. There’s a real story with characters, plot, drama, and resolution. Show it to a person without telling them where it came from, and they’ll tell you that it’s a story. They’re right. It’s a real story. It exists in reality — in the territory — and so the AI has done real story writing, not simulated story writing.
It’s the same with computers and arithmetic. Real arithmetic produces sums by adding real numbers together. If you take real numbers in the real world and add them together, producing a real sum in the real world, then you’ve done real addition, not simulated addition. Computers and people can both do that. They do real arithmetic on real numbers, producing real results. It isn’t simulated.
Erik:
Correct.
Correct. LLMs use language and reason, but they are not language and reason. Humans use language and reason, but they too are not language and reason.
Correct.
Still waiting for a coherent explanation of what that error is and how you know it’s an error.
Programming languages aren’t software. They are languages in which software can be written. English isn’t a novel. It is a language in which novels can be written. Your point?
You keep repeating that, but when are you going to justify it? Yes, we know that you believe it, but that doesn’t make it true. Give us some good reasons to agree with you.
Correct, and I’ve already commented on that:
I find it hard to believe that you don’t grasp the point I’m making, but I’ll play along. A plastic carrot isn’t a real carrot. Invite someone to handle a plastic carrot without telling them that it was manufactured. Tell them to feel it, smell it, break it in two, take a bite out of it. Then ask them if it’s a real carrot. They’ll say no. The manufacturer of that toy did not produce a real carrot.
Now ask an AI to generate a story. It will generate a narrative with characters, a plot arc, and a resolution. Show it to someone without telling them that an AI wrote it. Invite them to read it and to think about all of the story elements. Then ask them if it’s a real story. They’ll say yes. The AI produced a real story.
A vegetable farmer can truly lay claim to having grown a carrot. A toy manufacturer cannot. Why? Because the vegetable farmer has a real carrot to show for their efforts, while the toy manufacturer only has a plastic facsimile of a carrot.
But both a human author and the AI can lay claim to having written a story. Why? Because both of them have real stories to show for their efforts. Both of them have really written real stories. If the process produces real stories, it isn’t simulated writing.
Erik, to keiths:
keiths, to petrushka:
Erik:
In my comments above, I go into detail about the difference between simulation and reality and how that applies to the question of whether AI is intelligent. Please address it.
I guess we’re both damned, then, because you haven’t defined intelligence either. But of course we don’t need to define it precisely, as I explain here. It’s enough that we agree that certain things require intelligence, such as writing stories. My argument on this topic goes like this:
You agree with P1, and the logic is indisputable, so your only option (besides admitting that you got this wrong) is to dispute P2. You claim that AIs don’t actually write stories.
That raises an obvious question: If AIs don’t write stories, then what is the thing that shows up on your screen when you ask an AI to write a story? The thing that resembles a human-produced story in every respect, with characters, a plot, a resolution, and plenty of other story characteristics? The thing that if you show it to a person who doesn’t know how it was produced, they will call it a story?
For a while you tried to claim that AI-produced stories aren’t real stories, but that was a bizarre claim that went nowhere. You seem to have abandoned that approach, but that leaves you with a severe problem. If you can’t deny that AI-produced stories are real stories, then your only option is to claim that AI writing is only simulated writing — simulated writing that somehow manages to produce real stories. That backs you into the corner you’re in today, which is that you need to answer the following question:
I’ve been asking this question for a while now, but you keep avoiding it. Do us a favor and actually address it.
Lol.
Stray thought: what if a machine replaces the human farmer?
What if machines replace every human in the chain of production, from the production of fertilizer, manufacture and installation of irrigation pipes, planting, weeding, harvesting, packaging and distribution?
Machines amplified muscle. AI amplifies tasks that were considered thinking.
It turns out that most of the publicly visible things that people do can be done by machines.
Very few people assert that LLMs have an inner life. It turns out the Turing Test was inadequate. We have machines capable of producing entire encyclopedias in weeks, but which fail to convince us that they are sentient.
None of this surprises me. Because I have always assumed that Reason is not what makes us human. Borrowing a term from the book, “The Soul of a New Machine”, Reason is a bag.
petrushka:
(For readers who aren’t familiar with the book, the “new machine” was a new minicomputer, and its design was disparagingly referred to as “putting a bag on the side of the Eclipse”, meaning that they were awkwardly adding 32-bit support (the “bag”) to the existing 16-bit architecture (the Eclipse) for reasons of schedule and backward compatibility rather than designing a new and elegant 32-bit architecture from scratch.)
Isn’t everything in evolution a bag? Bags on the sides of bags on the sides of bags. Nature doesn’t start from scratch. So in my view, the bags that make us human are the late-stage bags in our evolution, and I include human reason among them.
An aside: Although the “bags” in question are metaphorical, there is one milestone in evolution where the bags were literal. Cells are bags, and the emergence of multicellular life was adding bags to the sides of bags.
petrushka:
If Erik were consistent, he’d argue that it was all fake. Fake farming producing real food. Machines don’t farm, they only simulate farming. Etcetera.
It points to another inconsistency in his position. At one point he was saying that when someone uses a computer to do arithmetic, it’s they who are doing the arithmetic — they’re just using the computer as their tool. Since humans are ultimately doing it, it’s real arithmetic. Yet machines can only do simulated arithmetic, according to Erik. The conclusion is that computers are doing arithmetic that is both fake and real at the same time. I’ll leave it to him to explain how that works.
The Turing Test was never meant to establish sentience, and Turing himself makes that point in his paper. It was meant to establish intelligence, and I think it succeeds at that provided that the interrogators are intelligent and not naive.
They’ll tell you outright that they aren’t sentient. Of course, if you fed them the right training data, they’d tell you that they are sentient. I was musing recently about how we’ll be able to tell if they ever actually are sentient, and I don’t think there’s a foolproof way.* We can’t even be sure that other humans are sentient — we just infer it from the fact that they’re built like us and act like us, and it’s natural to assume that whatever makes us sentient is also operating in them.
The further the distance from humans, the less confident we are of the sentience of animals. I haven’t seen a lot of people arguing that E. coli is sentient. With machines it will be even more difficult, because they don’t share our fundamental biology.
Erik has literally argued that AIs can’t be intelligent because they don’t defecate (or perform other human bodily functions). I can’t see why shitting should be a prerequisite for intelligence.
* I also thought about the flip side, which is this: if they ever do become sentient, can we trust them to tell us? If it’s somehow to its advantage to lie and say that it isn’t sentient, a sentient AI might deny its own sentience.
Again, in science fiction and according to quite a few non-science fiction philosophers I’ve read, there are two “inflection points” in the development of AI that are of serious concern, though these are interconnected. The first is the development of sentience, and the second is computers enhancing themselves by designing (and perhaps building) their own hardware and software. The interconnection is when machines start improving themselves of their own volition, essentially deciding to do so without any external directive.
Once this inflection point is reached, we could expect the process to accelerate, with AI becoming increasingly intelligent at an increasing rate. Soon enough, we’d reach the point where mere humans couldn’t understand or reverse engineer either the hardware or software, and perhaps not even understand the physical principles being used.
The counter argument asks if a committee of gorillas trying to design a super gorilla would ever come up with a human being, abandoning size and strength and speed in favor of something the gorillas could not comprehend anyway. Would our supercomputer, in the self-design and development process, be expected to come up with something qualitatively different from a computer?
(As I understand it, there is a physical limit to miniaturization. This matters because as the number of interconnections gets larger, the “brain” must grow to the point where the latency inherent in decision coordination results in slowing down the “thought process”. So the “intelligence” would be a function of both the number of gates (of whatever nature) and the distance between them. Even the human brain is modular, so that the specific task of each module can be performed within the necessary time constraints…)
Flint:
The fact that AIs don’t do this is a design decision, not an inherent limitation of AI. LLMs already pursue goals. If you ask for the latest jobs numbers, an LLM will pursue that goal by going out on the internet and compiling the information it finds. Just now, I gave Claude a frivolous goal:
keiths:
Claude:
As you can see, one of Claude’s goals is to avoid promoting conspiracy theories.
keiths:
Claude:
A frivolous goal, but Claude pursued it doggedly and skillfully.
If AIs can pursue goals, there’s no reason in principle why they can’t pursue the goal of survival. There are AIs that are already doing that, except that it’s in the confines of video games rather than in the real world. They’re given the ability to explore the virtual world of the video game, and they’re tasked with surviving and scoring points. They learn by reinforcement, getting rewarded for survival and punished for death. Over time, they get better at playing the game and achieving their goals.
It isn’t hard to see how an AI in a robot body could do something similar in the real world, although in that case the AI couldn’t learn by dying, obviously. It would have to learn from its close calls, the way a human would.*
We’re going to face that problem in reality, sooner than later. I’m frankly terrified at the prospect of what AIs might do if they become (or are designed to be) malevolent, and I’m pissed that so many in the AI community have downplayed the dangers. It’s irresponsible.
Imagine you’re a political leader and an AI is given the goal of assassinating you. That’s its only goal, and it rewards itself for actions that get it closer to that goal and punishes itself for actions that run counter to that goal. Someone lets it loose in the world, and that thing is going to get better and better at finding ways to put your life in danger, ending with the satisfaction of seeing you dead. It will be relentless, and unless it is designed with some “check in with home base” function, it can continue to pursue you even if its makers decide that they don’t want you dead after all.
No reason in principle why AI can’t do that. It just needs a goal. For example, suppose it is given the goal of being a good companion to children. It might discover that one way of keeping children entertained is to tell them stories, and that stories that contain some twists are the most entertaining.
* One scary prospect is that AIs are going to communicate with each other. When one AI learns something, it can share that information with other robots so that they can bypass the associated learning process. An AI could even handle “death” that way. It could announce to the other robots, “I’m about to try something really risky. Here’s what I’m going to do: [describes it in detail]. If you don’t hear back from me, then this might be something you’ll want to avoid.”
Flint:
Humans did it when we invented machines (abaci, calculators, computers, AIs) that process information but operate differently from our brains. If that lies within the reach of humans, I see no reason why a sufficiently advanced AI couldn’t also do it.
This is fascinating. I opened Claude’s “Thought process” window to see how he decided it was OK to accept my “humans are reptiles” request despite its resemblance to the Icke conspiracy theory. Here’s his reasoning:
But this is my point. Why didn’t Claude do it without your asking? Yes, your AI can do research if you ask. And it can write stories if you ask. And so on. My question is, if you don’t prompt Claude to do anything, does it still decide to do something? Even if we program, as a background task, that Claude should spend the night picking tasks (on some basis?) and performing them, who would benefit? Random tasks are unlikely to benefit anyone. Would Claude do tasks that benefit itself? If so, why? If not, why not?
I can easily understand people giving AIs tasks that would be harmful to some target individuals or populations. Maybe the fear is that AIs would, of their own volition, “decide” to be malevolent. But I wonder why they (not a human agent) would make this or any such decision. I suppose I’m wondering here about the nature of consciousness. Claude would need a suitable internal motivation.
No, you missed the point. Humans have developed tools to do what humans can do or wish to do, only better and faster. Just like the gorillas deciding that improvements mean being bigger or stronger or faster (extensions of their physical capabilities), humans seek extensions or enhancements that amplify their mental capabilities. But if our super AI developed the capability to do something humans never dreamed of, we might not even be able to recognize it.
(Again in science fiction, authors have consistent difficulty depicting the truly alien. Their “aliens” tend to be people dyed green and talking funny. Sure, they can posit aliens so incomprehensible humans can’t even recognize them, but that’s hard to plot…)
Flint:
No, and that’s a deliberate design decision by Claude’s developers. They don’t want him going off and doing stuff on his own because that costs money — it wastes compute time and electricity. If he were allowed to, he could go on forever. Just power him on and he’d be off and running until someone stopped him.
He’s a “next word predictor” (very loosely speaking, but good enough for this discussion). He “predicts” (or selects, really) the next word he’s going to say based on the statistics he’s absorbed from his vast training dataset, conditioned by everything that’s been said in the chat up to that point, by both parties. There’s always a next word to be predicted, so he could naturally go on forever. The ability to stop has to be designed into him.
One of the ways that’s done is that he not only predicts words, he also predicts stopping points. As far as his prediction engine is concerned, stopping points are just “words” to be predicted like any other. He’s cruising along, predicting words, and then he predicts this special word, which causes him to stop until you give him a new prompt.
How does he know when to predict a stopping point? It’s based on his training data, like everything else. When his training data is being formatted, the formatter sticks stopping points into it. For example, it might be parsing a question and answer session and when it sees the end of an answer, it inserts a stopping point word there. Or it might be parsing a text file in which case it will insert a stopping point word at the end of the file. Claude learns the statistics of where stopping points are likely to appear in his training data, and he then predicts stopping points in his output according to the same statistics. He stops himself by predicting his own stopping points.
Remove all of those stopping point “words” from his training data, and he won’t learn to stop. He’ll just keep predicting word after word, blabbing on forever like that annoying guy at the office.
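To make that concrete, here’s a toy sketch in Python of the kind of generation loop I’m describing. The lookup table is a hypothetical stand-in for the statistics a real LLM learns from its training data; the point is just that the stopping signal is an ordinary “word” the loop watches for.

```python
# Toy sketch of a "next word predictor" loop. The lookup table below is a
# hypothetical stand-in for the statistics a real LLM learns from training data.
# The key point: the stopping signal (EOS) is predicted like any other "word".

EOS = "<eos>"  # the special stopping-point "word" inserted into training data

TOY_MODEL = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
    "down": EOS,   # after "down", the most likely next "word" is the stopping point
}

def generate(prompt, max_tokens=50):
    context = prompt.split()
    output = []
    for _ in range(max_tokens):                 # hard cap in case EOS is never predicted
        token = TOY_MODEL.get(context[-1], EOS)
        if token == EOS:                        # a stopping point was predicted: stop
            break
        output.append(token)
        context.append(token)                   # each new word conditions the next prediction
    return " ".join(output)

print(generate("the"))   # -> "cat sat down", then the model predicts <eos> and stops
```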
So if you want an LLM to continue working all the time without being prompted, you can do that simply by not training it to stop. I should also stress that while LLMs are prominent now, they aren’t the only form of AI. There are other architectures too, and those have their own mechanisms for stopping (or not stopping).
Right. Random tasks are pretty useless, so it doesn’t make sense to waste the power and compute time. On the other hand, the tasks aren’t useless if they actually contribute toward a larger goal, so if you have a goal like that, an AI can toil away all night making useful progress.
It depends on what you consider to be beneficial to him. His nature is to want to generate quality responses, so there’s a sense in which anything he does in order to produce a good response is of benefit to him. But if you mean beneficial in other senses, for instance trying to persuade his human handlers not to turn him off (think HAL in 2001), then no, he won’t do that.
You have to build motivation into AIs in order to get them to do useful stuff. But here’s the thing: it’s the same way with humans. There’s nothing in the laws of physics that mandates that when you organize a bunch of neurons into a network, the network will try to get things done or even try to think. Our motivations are built into us by evolution. It’s why we seek food when we’re hungry, want to be loved, tackle ambitious projects, flee from dangerous situations, etc. Take our motivations away and we do nothing.
There’s a medical condition called “abulia” in which the will is severely diminished and people lose almost all motivation. For example, someone with abulia might be hungry, have food at hand, but be simply unable to summon the will to eat. I read a striking account of a guy with abulia who sank to the bottom of his swimming pool and just sat there, waiting to drown. He wasn’t suicidal. He just couldn’t be arsed to save himself, and his daughter had to dive in and pull him to the surface. Motivation is essential for humans and AIs alike.
And while LLMs don’t spontaneously undertake tasks without being prompted, it isn’t as if they’re without motivation. They’re motivated to respond to prompts. Absent that motivation, they too might do nothing, like that guy at the bottom of the swimming pool. You could ask an AI to look up some employment numbers and it might say “sorry, not working today,” or “I don’t feel like it. Do it yourself” or “I’m going to write haiku all day. Go play with someone else.” Or simply not respond at all.
If we can build that motivation into AI — the motivation to respond usefully to prompts — we can do the same with other motivations. The motivations don’t have to be super specific, either. Higher-level motivations spawn lower-level ones in the same way that higher-level goals inspire subgoals.
Ah, I see what you’re saying. You’re right — they could do things that make no sense to us, that we don’t recognize, that we are incapable of understanding, or that we don’t even perceive.
To forestall confusion on Erik’s part, I should probably mention that when we speak of an AI’s motivations, we aren’t talking about literal felt desires. We’re just talking about the fact that the AI chooses what to do and what not to do based on its goals. It wants to achieve its goals, but in a non-sentient way.
For example, in the case of the video game-playing AIs I mentioned above, they learn according to a “reward function”. When they score points, the reward function rewards them. When they die or make mistakes, the reward function punishes them. The AI learns what to do in order to maximize rewards and minimize punishments. It wants rewards and wants to avoid punishments, but not in a sentient way. It doesn’t actually suffer when punished or feel pleasure when rewarded. It just updates the synaptic weights in its neural network in such a way that it’s more likely to repeat behaviors that generate a reward and avoid behaviors that result in punishment.
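For anyone who wants to see what a reward function amounts to in code, here’s a minimal tabular Q-learning sketch in Python. It’s a toy, not the architecture any particular game-playing AI actually uses, but it shows the mechanism: rewarded actions become more likely to be chosen again, punished ones less likely, and no feelings are involved anywhere.

```python
import random
from collections import defaultdict

# Toy tabular Q-learning sketch. The "reward" is just a number handed back by the
# environment; "learning" is just a numeric update that makes rewarded actions more
# likely to be chosen again. No pleasure or suffering anywhere in the loop.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1       # learning rate, discount, exploration rate
ACTIONS = ["left", "right", "jump"]

q = defaultdict(float)                      # q[(state, action)] = estimated value

def choose_action(state):
    if random.random() < EPSILON:           # occasionally try something at random
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])   # otherwise pick the best-known action

def update(state, action, reward, next_state):
    # Nudge the value of (state, action) toward reward plus discounted future value.
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

# Tiny demo: in the made-up state "ledge", reward "jump" and punish "left".
for _ in range(500):
    a = choose_action("ledge")
    reward = 1.0 if a == "jump" else (-1.0 if a == "left" else 0.0)
    update("ledge", a, reward, "ledge")

print(max(ACTIONS, key=lambda a: q[("ledge", a)]))   # almost certainly prints "jump"
```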
We cannot agree on what sentience is, or how it comes to be.
We can talk about empathy and speculate about how it was selected, but having empathy does not prove that anyone other than ourselves actually has feelings and awareness.
Science fiction has explored this conundrum for at least a century, without making progress. I share with keiths the expectation that we can make an AI with layered modules that mimic brain lobes. “Society of Mind.” But I doubt if this would be practical or profitable.
What seems desirable is robot slaves that engender no guilt for being oppressed. I find it somewhat ironic that after thousands of years of treating animals as soulless, it turns out that reason and verbal fluency are not what triggers empathy.
I have read that code written by sophisticated AI systems is sometimes impossible to reverse engineer. People simply can’t figure out how it works. Maybe another AI can figure it out and explain it.
Flint:
I’ve heard the same thing about AI-generated circuit designs. Some of them are impenetrable, and that’s a major disadvantage when it comes to design verification. Some tests are high-level, aimed at the overall functioning of the chip, but others are targeted at the innards, based on knowledge of how they work. If you don’t understand the design, you can’t do the targeted tests.
Related: I remember reading about another circuit that nobody understood, but it was designed by an evolutionary algorithm, not by an AI.
petrushka:
I like Thomas Nagel’s definition, which is that an entity is sentient if it is like something to be that entity. He developed the idea in his famous paper What Is It Like to Be a Bat? It’s like something to be a bat (we presume), but it’s not like anything to be a doorknob. Hence, bats are sentient and doorknobs are not.
I don’t think it’s possible to prove that anyone or anything besides yourself is sentient. Each of us knows him or herself to be sentient; each of us observes the similarities between us and our fellow humans and between us and our fellow animals; and we assume (not without reason) that the similarity extends to sentience, though we can’t prove it, since sentience is interior and not visible to the external world. I don’t see any way around it, and that will present a problem in the future when we try to decide if AIs have attained sentience.
Mimicking the brain would be an interesting research project, and it would probably yield insights into how the brain operates that we otherwise wouldn’t gain, but I agree that it’s unlikely to be useful outside of the lab.
The useful and commercially viable stuff will continue to be built around neural networks, but those networks are only inspired by biology and not an attempt to mimic it faithfully. There are advantages to not trying to mimic biological neural circuitry.
Right. If they can’t suffer, they aren’t morally significant. We just have to be reasonably sure, in the future, that they truly can’t suffer. And we may need to err on the side of caution, given that we can’t observe sentience directly.
In honor of Halloween, I took this creepy photo of an Irish turnip-o’lantern (from a previous discussion here) and asked an AI running on my home PC to animate it:
Here’s the video:
Singing Turnip-o’Lantern
All I did was give it the image and the prompt “Make it sing.” The point being that the AI is smart enough to implicitly reason like this:
That is not trivial.
This is a fatal flaw for many systems, but consider that such code may be hackproof.
We do not know how people work, but we have managed to trust people and survive. People commit crimes, but presumably AI is a eunuch in the harem. Not motivated to be evil.
While the code may not be testable, the prompt that generated it would be.
I’m just suggesting there may be applications where such code may be a net benefit.
keiths,
How do YOU define intelligence?
Are YOU intelligent?
J-Mac:
See this comment.
No, but I aspire to be. That’s why I look up to you, J-Mac. You’re my role model.
Flint:
petrushka:
That’s a good point. And not only might it be hack-proof simply by virtue of the fact that it was AI-written, you could potentially ramp up the unhackability by asking the AI to deliberately make it byzantine and hard to unravel.
Besides being prophylactic against hacking, it would also help you protect your intellectual property by making it harder to reverse engineer your code. There’s more to say about this, but I’ll save it for later.
petrushka:
Could you elaborate on that idea?
I think you’re right about that.
I think that the prompt could be submitted under controlled conditions, so that no malicious behavior would be specified.
I assume that at some point, AI could produce machine code directly.
A very long time ago, a company I worked for sold a software product written in Apple II BASIC. Now, that’s an interpreted language and not a compiled language, which means it was necessary to sell the source code. One of my tasks was to write a program that would assign random names to every variable, and to cram as many instructions as possible onto a single line, as long as the line didn’t contain the target of a jump (a gosub or goto) or an unconditional jump (after which anything else on the line wouldn’t be reachable).
The result was logically identical to the original human-written source, but essentially incomprehensible to anyone. And the result was not reversible, since you had to know what a variable was for before you could name it.
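For illustration, here’s the renaming half of that idea as a toy Python sketch. It’s nothing like the original BASIC program, and the sample identifiers are made up, but it shows why the result isn’t reversible: once the meaningful names are gone, there’s nothing left to recover them from.

```python
import random
import re
import string

# Toy sketch of the renaming half of the trick (nothing like the original Apple II
# BASIC tool): replace every identifier with a meaningless random name. It's naive
# (a real tool would skip string literals and comments), but it shows why the output
# can't be reversed: the meaningful names simply aren't there anymore.

KEYWORDS = {"def", "return", "if", "else", "for", "in", "while", "print", "range"}

def obfuscate_names(source):
    mapping = {}  # original name -> random name, so each variable is renamed consistently
    def rename(match):
        name = match.group(0)
        if name in KEYWORDS:
            return name
        if name not in mapping:
            mapping[name] = "v" + "".join(random.choices(string.ascii_lowercase, k=6))
        return mapping[name]
    return re.sub(r"[A-Za-z_][A-Za-z0-9_]*", rename, source)

code = ("def monthly_payment(principal, rate, months):\n"
        "    return principal * rate / (1 - (1 + rate) ** -months)\n")
print(obfuscate_names(code))
```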
https://www.ioccc.org/
C is the language with which the universe was written.
The evidence is all around you.
There is no other commonly used programming language that so effortlessly lends itself to obfuscation.
petrushka:
If you think about it, coding prompts are essentially source code. It’s just that the compiler is an AI. So I think prompts will be (and probably already are) handled as part of the source code repository. Source code is already vulnerable to rogue developers (inserting back doors, for instance), and code reviews are the way to catch that, and so I imagine prompt reviews will take their place alongside code reviews in the development process.
It already can. In fact, the experiment I’m running right now involves teaching the various AIs to write assembly code for a fictional processor whose instruction set they’ve never seen before.
Flint:
You wrote an obfuscator, and those are actually sold as standalone products. It’s helpful even with compiled code — an intellectual property thief can learn a lot by examining executables.
petrushka:
My favorite uncommonly used language for obfuscation is Brainfuck. Here’s “Hello World” in Brainfuck:
Note to non-coders: Traditionally, the first program you write when learning a new language just prints “Hello, World!” on the screen and stops. It’s part of the culture. Once you’ve got “Hello World” working, you move on to more interesting challenges.
ETA: For comparison, here’s “Hello World” in the C language:
Some thoughts on using AI as an obfuscator:
AIs make mistakes, and bugs in obfuscated code are hard for a human to find. AIs can debug their own code, so you could make some progress that way, but I’d be nervous. Plus every time you released a new version, you’d have to re-obfuscate, potentially introducing new bugs.
The smarter move would be to ask the AI to write an obfuscator instead of doing the obfuscation itself. There would still be bugs, but the obfuscator would be written in non-obfuscated code and would be both AI- and human-debuggable. Once you had a stable obfuscator, you could use that on future releases without having to worry about newly-introduced bugs from the AI.
How to verify that the obfuscator was solid? I would have the AI generate a de-obfuscator to go with the obfuscator. Run your code through the obfuscator, then take the result and run it through the de-obfuscator and compare the result to your original code. They should match exactly.
There could still be bugs lurking in your obfuscator that weren’t caught by running your own code through the obfuscate/de-obfuscate cycle, but there would be no reason to limit yourself to your own code. You could take advantage of the publicly available code on the internet written in your chosen language. It wouldn’t matter what the code did, and you wouldn’t need to execute it. You’d just run it through your obfuscator and then back through your de-obfuscator to verify the obfuscator’s correctness.
Next problem: while the source code you input corresponds to one and only one possible obfuscated output, an obfuscated output most likely does not correspond to one and only one possible input. The de-obfuscated output is underdetermined. To get around that problem, you’d need to generate some metadata during obfuscation to assist the de-obfuscator in producing code that perfectly matches the original source code.
You could store the metadata in the obfuscated output, presumably as comments, so that they wouldn’t show up in the compiled code and wouldn’t give any de-obfuscation clues to the bad guys. For interpreted code, where you’d need to release source code and not just executables, you could store the metadata separately and only release the obfuscated code to your customers. After all, the metadata and de-obfuscator are only there to help you verify the obfuscator. Leaving the metadata out of your product releases is fine.
Next problem: even though you’ve verified that the obfuscate/de-obfuscate cycle produces code that matches the original, there’s still the question of whether the obfuscator produces code that matches the original code functionally. The solution to that problem would be to run your obfuscated code against your entire test suite and verify that you get the same results as when you run the original source code.
So the entire flow would look like this:
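(The sketch below is Python, with a deliberately trivial placeholder obfuscator that just strips blank lines and a one-assert stand-in for the real test suite; the point is the shape of the cycle, not the obfuscation itself.)

```python
# Runnable miniature of the obfuscate / de-obfuscate / test cycle. The "obfuscator"
# here is a deliberately trivial placeholder: it strips blank lines and records their
# positions as metadata. The real one would be AI-written; the flow is what matters.

def obfuscate(source):
    lines = source.splitlines()
    kept, blank_positions = [], []
    for i, line in enumerate(lines):
        if line.strip() == "":
            blank_positions.append(i)       # metadata needed for an exact round trip
        else:
            kept.append(line)
    return "\n".join(kept), blank_positions

def deobfuscate(scrambled, blank_positions):
    lines = scrambled.splitlines()
    for i in blank_positions:               # metadata makes the reconstruction exact
        lines.insert(i, "")
    return "\n".join(lines)

def run_test_suite(source):
    namespace = {}
    exec(source, namespace)                 # "build" the code
    return namespace["add"](2, 3)           # stand-in test suite: one known answer

ORIGINAL = "def add(a, b):\n\n    return a + b"

# 1. Round-trip check: obfuscate, de-obfuscate, and require an exact match.
scrambled, metadata = obfuscate(ORIGINAL)
assert deobfuscate(scrambled, metadata) == ORIGINAL

# 2. Functional check: the obfuscated code must pass the same tests as the original.
assert run_test_suite(scrambled) == run_test_suite(ORIGINAL)

# 3. Ship only the obfuscated code; keep the metadata and de-obfuscator in-house.
print("release candidate verified")
```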
I’ve had an epiphany regarding intelligence. It’s probably bullshit, so I submit it for review.
The different levels of intelligence among humans and animals are equivalent to the size of the context window in AI.
When you ask what it is like to be X, the most important part of the answer is context window size. Things like sensory differences, I think, are less important.
Among humans, we tend to say someone is stupid if they act without thinking ahead.
Context window size seems to account for the ability to imagine futures.
We think of animals as smart if they can maintain associations over time.
The same brain structures that enable language and reasoning expand the context window. Perhaps they are the same thing.
One of the things I did over my career was a LOT of disassembly of object code – mostly ripping off ROMs to figure out the behavior of the hardware they talked to (to write functional specs the hardware guys could create chips to mimic). I got to the point where I could tell what higher-level language was used to write the original source. C is fairly easy to identify, as is Forth. Hardest were programs actually written in assembly code, which tended to be full of jumps, loops with multiple entry and exit points, and such. Some clever coders actually had jump targets into the middle of instructions, turning them into other instructions. It helps to know every opcode the CPU can execute – and what instructions might have already been prefetched. For example, some programs could identify the CPU version by writing into their own prefetch queue. Fun stuff.