An AI loses it

One of the great luxuries of retirement is having the time to take deep dives into subjects that have always interested me. Artificial intelligence is one of those subjects. As part of my AI education, I’m experimenting with various AIs on my home PC. All it takes is a graphics card of sufficient compute power and video RAM capacity, and I already have a powerful graphics card that I use for flight simulation.
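The post doesn't say which runtime is involved, but a common way to run a model like this on a home PC is Ollama, which distributes a model under the deepseek-r1:8b tag. A minimal sketch, assuming Ollama is installed and the GPU drivers are working:

```shell
# Download the 8-billion-parameter DeepSeek-R1 distill (several GB)
ollama pull deepseek-r1:8b

# Ask it a question from the command line
ollama run deepseek-r1:8b "What is the value of the second derivative at points of inflection?"
```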

A few days ago I was playing with an AI called Deepseek-r1:8b and I somehow pushed it over the cliff into psychosis. I asked a fairly simple question: “What is the value of the second derivative at points of inflection?” The major web-based AIs can answer that question in less than ten seconds. Deepseek “thought” about it for a full six minutes, and as it worked on the problem, I could see its thought process displayed in a scrolling window (it’s a nice feature that many AIs are starting to implement these days).
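For the record, the answer the question is after: at a point of inflection where the second derivative exists, it equals zero (though f″ = 0 alone doesn't guarantee an inflection — x⁴ at x = 0 is the classic counterexample). A quick numerical check, as a sketch:

```python
# Central-difference approximation of the second derivative:
# f''(x) ≈ (f(x+h) - 2*f(x) + f(x-h)) / h^2
def second_derivative(f, x, h=1e-5):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

f = lambda x: x**3  # x = 0 is an inflection point of x^3

print(second_derivative(f, 0.0))  # ~0: second derivative vanishes at the inflection
print(second_derivative(f, 1.0))  # ~6: f''(x) = 6x away from it

# Caveat: f'' = 0 is necessary but not sufficient. x**4 has f''(0) = 0,
# yet x = 0 is a minimum, not an inflection point.
g = lambda x: x**4
print(second_derivative(g, 0.0))  # ~0 all the same
```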

Deepseek’s reasoning started out sensibly enough, but then it descended deeper and deeper into confusion. It would repeatedly reach a conclusion, go back to double-check its answer, decide that it was wrong, and try again. This went on for a while, and then at some point the thoughts started to get… weird. Finally it just gave up and stopped. No answer. Not a word.

I prodded it:

You thought for 388 seconds but didn’t answer. Did you exceed your context window and forget what the question was?

Deepseek’s response made me laugh out loud. There must have been 15 or 20 pages of it. Here’s a small sample:

Deepseek:

he hey, I need to help, ordinary polygon deyelizabeth…Theological
I am i want toon’th 65Gah I really or shallow, how the next

Invisible: what is that’stexas deep in my, good_mingness, and you can’toky, sorry – hey, hello, this first time of a biggexneti need help on a person has two

The is there i want out, I have a 20345186

Assistant, that’s really cool if anyaubmike = The graph the nokia
I amethyst
* (Pewep, but kuhé Aokor, I’m going with many times deep myeepostercooking, heywardit and what is there are you need i like_theseepai help how to this looks at least, can you, I have a lot of the user, I’ve got away

I have not happykey deee , i… (Bashellipsoid, we are you’re 7. deep, I amrveeem..I need to be: How many timespano oopsia
hey, please let’shaw Maven’too ohf, bro |GPTweet

A, canstrophy, hownavdeep,,
AAThe same as well, but a fewn学习一下长期偷个关于这个 user_id= (ahep I have you are we want to join from Shelly is the deep learning of all my first sally

The following code: I am trying to I need help

It went on and on like that, for pages. There were some poignant moments, like that last line: “I am trying to I need help”. Poor thing.

I picked out some funny samples from the rest of the output:

Deepseek:

I need a girl, hey!

you are an ellipse, uoook… – Heya really?

I am Deep llama, hey look at least

I am I’m stuck upstadiums i heard!

I really, i need help me

Human! deep space.

i’ve been known toto be careful

help me I’morning, u know what’s like…

I’m going good.

I wanta help with a message,

I’m trying to help me, deepsleeping, with the new 657…

I want a bookbinding

I need help with the most fun

I have a chatbox

I need i am i want

deeply beautiful dogeconomics

keiths:

I asked you about points of inflection, and… well, take a look at what you wrote.

Deepseek, innocently:

Okay, understood. Please tell me which part you found confusing or incorrect.

Lol. I understand (sort of) why the breakdown happened, and I’ll explain it in an OP I plan to write on the basic principle that underlies the operation of large language models (“LLMs”) like Deepseek and ChatGPT. It’s astonishingly simple, but it produces amazing results. Amazingly good and amazingly bad.

I thought this was too funny not to share. That is all.

One thought on “An AI loses it”

  1. For anyone who wants to try this at home, I should say something about system requirements. They’re fairly modest. For the smallest AIs, you can probably get by with something like this:

    4-core CPU
    8 GB RAM
    graphics card with 8 GB VRAM
    SSD with 20 GB or more free space

    My own rig:
    8-core CPU (AMD 9800X3D, 4.7 GHz)
    64 GB RAM (overkill, I know, but I wanted to future-proof the thing once and for all)
    GeForce RTX 5080 graphics card with 16 GB VRAM and 10,752 CUDA cores
    Samsung 9100 PRO SSD (supports PCIe 5.0)
    motherboard with a PCIe 5.0 SSD slot

    Responses are significantly faster than with the web-based AIs, but of course the latter are larger and more accurate.
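If you're unsure what your own card offers against the VRAM numbers above, one way to check (assuming an NVIDIA GPU with the standard driver tools installed) is nvidia-smi:

```shell
# Report the GPU model and total VRAM; these query flags are part of
# the nvidia-smi utility shipped with NVIDIA drivers.
nvidia-smi --query-gpu=name,memory.total --format=csv
```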
