I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
keiths:
Erik:
No, you don’t. I already had Gemini generate a story for you that clearly is not in its training dataset (or anywhere on the internet, for that matter):
Erik:
You’re confusing plagiarism detection with AI detection. They aren’t the same thing. Something can be plagiarized but not AI-produced, AI-produced but not plagiarized, both, or neither. All four combinations are possible, because the two properties are orthogonal.
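To spell out the orthogonality, here's a trivial sketch enumerating the 2 x 2 grid of possibilities (the example labels are mine, purely illustrative):

```python
from itertools import product

# Two independent boolean properties yield 2 x 2 = 4 combinations.
# The example descriptions are hypothetical illustrations.
examples = {
    (True,  True):  "pasting AI output that itself copies a known source",
    (True,  False): "hand-copying a paragraph from Wikipedia",
    (False, True):  "an AI writing a genuinely novel story",
    (False, False): "writing an original essay yourself",
}

for plagiarized, ai_produced in product([True, False], repeat=2):
    print(f"plagiarized={plagiarized!s:<5} ai_produced={ai_produced!s:<5} "
          f"e.g. {examples[(plagiarized, ai_produced)]}")
```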
No, because I understand how neural networks work. They don’t store their training dataset in a database. They can’t just look up everything they’ve been trained on. Anyway, it’s clear that AIs don’t merely plagiarize. See the above property tax/lemon meringue/sentient nebula story. I challenge you to go out on the internet and find the original from which Gemini was cribbing. You won’t find it, because there isn’t one.
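To make that concrete, here's a minimal sketch (assuming PyTorch; the toy architecture is mine and isn't meant to resemble Gemini). A trained network's entire persistent state is a set of parameter tensors; there is no table of training examples anywhere to look up:

```python
import torch.nn as nn

# A toy network standing in for any trained model. Everything it
# "knows" lives in the weight and bias tensors below.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

n_params = sum(p.numel() for p in model.parameters())
print(f"total parameters: {n_params}")  # 769 numbers, nothing else

# state_dict() is all that persists when the model is saved:
# weights and biases only, no copy of the training dataset.
for name, tensor in model.state_dict().items():
    print(name, tuple(tensor.shape))
```

Production models are vastly larger, but the structure of their state is the same: parameters shaped by the training data, not a retrievable copy of the training data itself.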
Likewise, I challenge you to search far and wide on the internet for an image from which this Gemini-generated image was cribbed. Good luck.
Huh? College professors struggle to recognize stories when looking at them? Where is this strange place in which you live, where college professors struggle to identify stories? Where I’m from, even children know the difference between stories and non-stories.
Try it out in your strange land. Find a kid. Read them “Goldilocks and the Three Bears” and ask them if it’s a story. Then do the same with a page from the local phone book. Report your results here.
I have no idea what you’re talking about, unless you’re saying that college professors, like everyone else, learn to recognize stories by being exposed to them. It’s the same with AIs.
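If it helps, here's a minimal sketch of learning to recognize stories by exposure (assuming scikit-learn; the toy texts and labels are mine). The classifier is never handed a definition of "story"; it only ever sees labeled examples:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training examples: two story snippets, two phone-book-like entries.
texts = [
    "Once upon a time, a girl wandered into a cottage in the woods.",
    "The dragon slept until a knight crept into its cave one night.",
    "Smith, John .... 555-0142   Smith, Jane .... 555-0199",
    "Plumbing services: Acme Pipes, 12 Main St, 555-0188",
]
labels = ["story", "not-story", "not-story", "story"][::-1][::-1]
labels = ["story", "story", "not-story", "not-story"]

# Fit a classifier purely from the labeled examples above.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["A bear family returned home to find their porridge eaten."]))
```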
Why not continue in the ‘Antivax’ thread? You seemed to be rolling your sleeves up for a good old ding-dong, then just withered away. I addressed your point on flu – silence. I addressed the ‘definition of vaccine’ trope. Silence. But every now and then you pop up in other threads to say something vague and petulant, then disappear again.