I think a thread on this topic will be interesting. My own position is that AI is intelligent, and that’s for a very simple reason: it can do things that require intelligence. That sounds circular, and in one sense it is. In another sense it isn’t. It’s a way of saying that we don’t have to examine the internal workings of a system to decide that it’s intelligent. Behavior alone is sufficient to make that determination. Intelligence is as intelligence does.
You might ask how I can judge intelligence in a system if I haven’t defined what intelligence actually is. My answer is that we already judge intelligence in humans and animals without a precise definition, so why should it be any different for machines? There are lots of concepts for which we don’t have precise definitions, yet we’re able to discuss them coherently. They’re the “I know it when I see it” concepts. I regard intelligence as one of those. The boundaries might be fuzzy, but we’re able to confidently say that some activities require intelligence (inventing the calculus) and others don’t (breathing).
I know that some readers will disagree with my functionalist view of intelligence, and that’s good. It should make for an interesting discussion.
In discussing a mathematical result with Claude (OP forthcoming), I used the made-up word ‘numerize’ to describe the conversion of a predicate (which can be true or false) to a number — 1 for true, 0 for false. ‘Quantize’ is already taken, with a different meaning, so I settled on ‘numerize’. I like to play with language and it can be fun to test AI’s ability to recognize neologisms and infer their meaning.
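As a one-line Python sketch (the function name is my coinage, not anything standard; mathematicians write the same operation as the Iverson bracket [P]):

```python
def numerize(predicate: bool) -> int:
    """Map a truth value to a number: 1 for true, 0 for false."""
    return 1 if predicate else 0
```

So `numerize(3 > 2)` returns 1 and `numerize(2 > 3)` returns 0.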
My prompt was
Claude immediately understood what I meant and responded appropriately. He has abstracted the rule that adding -ize to a noun or adjective creates a verb meaning “to bring about X”, where X is the base word. This isn’t something you’d intuitively expect from a system that is fundamentally built on next-token prediction, and the fact that AI is able to do it is yet more evidence that AI is truly intelligent.
petrushka,
I saw that paper too. I think I’ll do an OP on it.
This sort of thing isn’t promising…
…but I’m sure the thread will end up being about AI consciousness generally, not just this paper.
This is not scientific, but I think consciousness begins with tropisms and evolves to support survival.
I don’t think you can evolve consciousness without evolving layers of survival mechanisms.
Trying to build it from the top down would be like trying to program the weights of an LLM from first principles.
As a kid, I was fascinated with the mechanics of reading. It struck me that if someone were sitting across the table from me, it was surprisingly easy to read whatever they had in front of them despite the text being upside down from my perspective. That led me to experiment with holding a book up to a mirror and reading the reflection, which was harder, and then reading the reflection when I held the book upside down, which was the hardest.
I was recently reading about the VWFA (aka the Visual Word Form Area), a brain region responsible for recognizing characters and words, and it reminded me of my childhood experiments. I wondered how much practice it would take to read inverted, mirrored, and inverted + mirrored text at speeds comparable to my normal reading speed. I could grab a mirror and practice, but it would be clunky physically and a pain to measure and record my words per minute scores as they gradually increased.
Then, as with practically every problem I tackle these days, I asked myself if AI could help. I described the project to Claude and had him write a program that could display text files in all of those orientations while measuring and recording my reading speed. I also asked him to support normal orientation so that I could get a baseline for my reading speed.
In less than five minutes, he produced the program. He also found an online corpus, the CLEAR corpus, which contains 5,000 passages used in reading research, each tagged with its reading difficulty.
The program loads the passage in the specified orientation. I hit the space bar to start the timer, read the passage, and then hit the space bar again to stop the timer. The program computes the wpm (words per minute) score and stores it in a database along with the filename. When loading a passage, it checks the database to make sure I haven’t used it before, in order to avoid any practice effects. (That’s probably overkill, but Claude suggested it and I saw no reason not to implement it, since he was the one doing the work.)
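A minimal sketch of that flow, assuming a simple SQLite schema (all names here are illustrative, not Claude’s actual code; the real program uses the space bar in a GUI rather than a console prompt):

```python
import sqlite3

def words_per_minute(text: str, seconds: float) -> float:
    """wpm = word count divided by minutes elapsed."""
    return len(text.split()) / (seconds / 60.0)

def already_read(con: sqlite3.Connection, filename: str) -> bool:
    """True if this passage is in the results DB (avoids practice effects)."""
    row = con.execute("SELECT 1 FROM runs WHERE filename = ?",
                      (filename,)).fetchone()
    return row is not None

def record_run(con: sqlite3.Connection, filename: str,
               orientation: str, wpm: float) -> None:
    """Store one timed reading in the results database."""
    con.execute("INSERT INTO runs VALUES (?, ?, ?)",
                (filename, orientation, wpm))
    con.commit()

con = sqlite3.connect(":memory:")  # the real program would use a file
con.execute("CREATE TABLE runs (filename TEXT, orientation TEXT, wpm REAL)")
```

The timer itself is just two timestamps around the reading; everything else is bookkeeping.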
The program is about a thousand lines and takes full advantage of the available Python libraries. The only bug was that Claude forgot to implement word wrap, so the entire passage appeared on a single line. He easily fixed that.
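That kind of fix is standard-library territory in Python; a sketch using `textwrap` (the width value is illustrative):

```python
import textwrap

def wrap_passage(text: str, width: int = 72) -> list[str]:
    """Break a passage into display lines instead of one long line."""
    lines = []
    for paragraph in text.split("\n"):
        # textwrap.wrap drops empty strings, so keep blank lines explicitly
        lines.extend(textwrap.wrap(paragraph, width=width) or [""])
    return lines
```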
I played with the program and asked for some additional features. The CLEAR corpus contains difficulty ratings for each passage, so those are now stored in the results database. Claude even suggested that he could compute difficulty ratings for non-CLEAR passages using the Flesch-Kincaid scale, so I had him do so. He noted that when reporting stats, he could compute a correlation coefficient between my wpm performance and the difficulty ratings of the passages, so I approved that change too.
I also asked him to make the font selectable, because fonts vary wildly in their readability when reoriented. The font is now recorded for each run.
It was fascinating to watch him code, because he tested everything himself before delivering the final product. This technology is frikkin’ amazing. And also genuinely scary.
Sample screenshots so you can try it for yourself:
Normal:

Flipped vertically:

Flipped horizontally:

Flipped vertically and horizontally:

ETA: Found one additional bug: Em dashes were being rendered incorrectly because the program assumed UTF-8 encoding when the passages were in CP-1252. Only two bugs in a thousand lines of nontrivial code.
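The usual fix is to try UTF-8 first and fall back to CP-1252, where the em dash is the single byte 0x97; a sketch (not Claude’s actual code):

```python
def read_passage(path: str) -> str:
    """Decode a passage file, tolerating both UTF-8 and CP-1252."""
    with open(path, "rb") as f:
        raw = f.read()
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        # CP-1252 maps 0x97 to the em dash (U+2014), among others
        return raw.decode("cp1252")
```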
I have no difficulty reading any of these.
It’s slow going at first, and on some words I have to go letter by letter.
But that’s with zero practice. And I’m old.
The ordering of words is arbitrary and conventional. A young person with a week’s practice should have no problem.
I’m reminded that people have adapted to image reversing goggles.
petrushka:
That’s the point of my experiment. We’re slower at reading the odd orientations, and I want to see how quickly the speeds improve with practice and whether they hit a plateau. I suspect they will.
The letter-by-letter phenomenon is interesting because it’s similar to learning to read for the first time. You’re consciously sounding out words rather than just recognizing them. When the Ukraine war broke out, I taught myself Cyrillic so that I could understand the writing on the signs I was seeing in photos and the place names on maps. It’s still mostly a letter-by-letter affair, though I do recognize some words on sight now, like Путин (Putin) and Зеленський (Zelenskyy). Then again, I’m not getting much practice. I don’t understand Russian or Ukrainian, so I can’t read news articles. It’s mostly just signs and maps.
petrushka:
Reading from right to left comes pretty naturally, because that’s what we have to do if someone is sitting across from us and we’re reading what they have in front of them. It’s the word and letter recognition that becomes harder, not the reading direction.
I suspect it’s like being bilingual.
Up to a certain age it’s easy. After a certain age, you have to translate.
I’m watching a lecture series on Language and the Mind and today, coincidentally, the lecturer mentioned a cool study on the relationship between reading direction (left-to-right vs right-to-left) and spatial metaphors for time: