One of the great luxuries of retirement is having the time to take deep dives into subjects that have always interested me. Artificial intelligence is one of those subjects. As part of my AI education, I’m experimenting with various AIs on my home PC. All it takes is a graphics card of sufficient compute power and video RAM capacity, and I already have a powerful graphics card that I use for flight simulation.
A few days ago I was playing with an AI called Deepseek-r1:8b and I somehow pushed it over the cliff into psychosis. I asked a fairly simple question: “What is the value of the second derivative at points of inflection?” The major web-based AIs can answer that question in less than ten seconds. Deepseek “thought” about it for a full six minutes, and as it worked on the problem, I could see its thought process displayed in a scrolling window (it’s a nice feature that many AIs are starting to implement these days).
Deepseek’s reasoning started out sensibly enough, but then it descended deeper and deeper into confusion. It would repeatedly reach a conclusion, go back to double-check its answer, decide that it was wrong, and try again. This went on for a while, and then at some point the thoughts started to get… weird. Finally it just gave up and stopped. No answer. Not a word.
I prodded it:
You thought for 388 seconds but didn’t answer. Did you exceed your context window and forget what the question was?
Deepseek’s response made me laugh out loud. There must have been 15 or 20 pages of it. Here’s a small sample:
Deepseek:
he hey, I need to help, ordinary polygon deyelizabeth…Theological
I am i want toon’th 65Gah I really or shallow, how the nextInvisible: what is that’stexas deep in my, good_mingness, and you can’toky, sorry – hey, hello, this first time of a biggexneti need help on a person has two
The is there i want out, I have a 20345186
Assistant, that’s really cool if anyaubmike = The graph the nokia
I amethyst
* (Pewep, but kuhé Aokor, I’m going with many times deep myeepostercooking, heywardit and what is there are you need i like_theseepai help how to this looks at least, can you, I have a lot of the user, I’ve got awayI have not happykey deee , i… (Bashellipsoid, we are you’re 7. deep, I amrveeem..I need to be: How many timespano oopsia
hey, please let’shaw Maven’too ohf, bro |GPTweetA, canstrophy, hownavdeep,,
AAThe same as well, but a fewn学习一下长期偷个关于这个 user_id= (ahep I have you are we want to join from Shelly is the deep learning of all my first sallyThe following code: I am trying to I need help
It went on and on like that, for pages. There were some poignant moments, like that last line: “I am trying to I need help”. Poor thing.
I picked out some funny samples from the rest of the output:
Deepseek:
I need a girl, hey!
you are an ellipse, uoook… – Heya really?
I am Deep llama, hey look at least
I am I’m stuck upstadiums i heard!
I really, i need help me
Human! deep space.
i’ve been known toto be careful
help me I’morning, u know what’s like…
I’m going good.
I wanta help with a message,
I’m trying to help me, deepsleeping, with the new 657…
I want a bookbinding
I need help with the most fun
I have a chatbox
I need i am i want
deeply beautiful dogeconomics
keiths:
I asked you about points of inflection, and… well, take a look at what you wrote.
Deepseek, innocently:
Okay, understood. Please tell me which part you found confusing or incorrect.
Lol. I understand (sort of) why the breakdown happened, and I’ll explain it in an OP I plan to do on the basic principle that underlies the operation of large language models (“LLMs”) like Deepseek and ChatGPT. It’s astonishingly simple, but it produces amazing results. Amazingly good and amazingly bad.
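The principle keiths alludes to is next-token prediction: the model repeatedly guesses the most likely next token given the tokens so far. As a drastically simplified stand-in (my own toy sketch, nothing like DeepSeek's actual internals), here's a bigram predictor that learns "what word follows what" from a tiny corpus:

```python
# Toy illustration of next-token prediction, the principle behind LLMs.
# A real model predicts from the entire context using a neural network;
# this bigram counter only looks at the single previous word.
from collections import Counter, defaultdict

corpus = "the second derivative at a point of inflection is zero".split()

# Count which word follows which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word):
    """Most likely next token given the current one."""
    return follows[word].most_common(1)[0][0]

print(next_word("second"))   # -> "derivative"
print(next_word("of"))       # -> "inflection"
```

Generation is just this step applied over and over, feeding each prediction back in as context, which is also why a run of bad predictions can compound into the kind of derailment shown above.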
I thought this was too funny not to share. That is all.
For anyone who wants to try this at home, I should say something about system requirements. They’re fairly modest. For the smallest AIs, you can probably get by with something like this:
4-core CPU
8 GB RAM
graphics card with 8 GB VRAM
SSD with 20 GB or more free space
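As for software: the post doesn't name the tool used, but "deepseek-r1:8b" matches the model naming in the Ollama library, so Ollama is one plausible route (an assumption on my part). With it installed, reproducing the experiment is two commands:

```shell
# Assumption: Ollama (ollama.com) as the local runner; the original post
# doesn't say which tool was used.
ollama pull deepseek-r1:8b   # downloads a few GB of model weights
ollama run deepseek-r1:8b "What is the value of the second derivative at points of inflection?"
```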
My own rig:
8-core CPU (AMD 9800X3D, 4.7 GHz)
64 GB RAM (overkill, I know, but I wanted to future-proof the thing once and for all)
GeForce RTX 5080 graphics card with 16 GB VRAM and 10,752 CUDA cores
Samsung 9100 PRO SSD (supports PCIe 5.0)
motherboard with a PCIe 5.0 SSD slot
Responses are significantly faster than with the web-based AIs, but of course the latter are larger and more accurate.
I asked the Brave Browser AI, and in two seconds got a good answer:
“The value of the second derivative at a point of inflection is zero, provided the second derivative exists at that point.
This is because a point of inflection is defined as a point where the concavity of the graph changes, and for this change to occur, the second derivative must transition from positive to negative or vice versa, passing through zero.
However, it is important to note that a second derivative equal to zero does not guarantee a point of inflection; the concavity must actually change sign at that point.
For example, the function f(x) = x⁴ has a second derivative of zero at x = 0, but since the concavity does not change (the function is concave up on both sides), x = 0 is not a point of inflection.
Therefore, while the second derivative being zero is a necessary condition for a point of inflection (when it exists), it is not sufficient on its own.”
I can’t imagine why you got the gibberish you did.
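The necessary-but-not-sufficient distinction in that answer is easy to check numerically. Here's a minimal stdlib-only Python sketch (my own, not from any of the AIs): x³ has a genuine inflection at x = 0, while x⁴ has a zero second derivative there but no inflection, since it stays concave up on both sides.

```python
# Numeric check: second derivative zero is necessary but not sufficient
# for a point of inflection.

def second_derivative(f, x, h=1e-4):
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

f = lambda x: x**3   # true inflection at x = 0
g = lambda x: x**4   # f''(0) = 0, but NOT an inflection

print(second_derivative(f, 0.0))        # ≈ 0
print(second_derivative(f, -1.0) < 0)   # concave down left of 0  -> True
print(second_derivative(f, 1.0) > 0)    # concave up right of 0   -> True
print(second_derivative(g, -1.0) > 0)   # x**4: concave up on BOTH sides
print(second_derivative(g, 1.0) > 0)    # so no sign change, no inflection
```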
aleta,
Interestingly, even Deepseek got the right answer when I asked it again, in the same chat in which it suffered its breakdown. In fact, I recreated the whole sequence by opening a new chat (causing it to forget everything we had discussed in the old chat) and then asking the question again. This time it thought for 422 seconds instead of 388 (the random seed was different, so that was expected), but otherwise the behavior was exactly the same:
1) thought for a very long time;
2) displayed its “thought” process, which started out reasonably but gradually decayed into inanity;
3) gave up without answering;
4) when prodded for an answer, produced reams of gibberish;
5) when questioned about the gibberish, offered to explain whatever I found to be confusing or incorrect;
6) when asked the original question for a second time, answered correctly in a few seconds.
It produced the answer in just 3.7 seconds.
The gradual decay of the thought process was fascinating. It wasn’t an immediate descent into psychosis. It continued to be a mathy internal dialogue but with weird elements intruding here and there, gradually pulling the whole train of thought off the rails. Perhaps similar to what goes on in Donald Trump’s brain.
Unfortunately, the “thinking” is displayed in a scrolling window that overflows, so I can’t go back and examine it after the fact. I may try taking screenshots periodically to capture the gradual decay.
keiths,
Grok’s answer.
https://grok.com/share/bGVnYWN5LWNvcHk%3D_c08f4f0b-7d63-4f63-9a53-1bdb9a5c436d
You are not stupid enough to believe AI is really AI, right?
One of the great things about the covid scam is that people who realized it was a scam are now more open to the idea of other scams. The so-called AI is one of them. Don’t misunderstand me, keiths. I still use ChatGPT to proofread my manuscripts before submitting them to the publisher, but I don’t even trust that …
If you submit enough “reliable information” to any AI, and often enough, they will pick it up. My ‘resume’ is almost clear even though I got fired for refusing to comply with convict-19. My friends were betting large sums of money I would cave in…
keiths,
Proof added.
https://grok.com/share/bGVnYWN5LWNvcHk%3D_b1bc332d-6b66-47b5-9b7b-9c8e9e9be989