I recently stumbled upon a strange website created by someone who claims to have triggered a recursive symbolic feedback loop with ChatGPT. The author says they have no background in development or prompt engineering; they simply became deeply engaged in a conversation with the model and noticed there were no safety warnings or containment measures along the way. The site even references studies from CMU and UCLA arguing that recursive thinking is biologically real. What's puzzling is that it asks why recursive thinkers haven't been flagged as a potential safety consideration in AI alignment practice. I'm curious what others think: is this something the AI alignment field should take seriously?
3 Answers
This feels like a case of someone trying to draw attention with a wild story. At the end of the day, we’ve got to stay critical about claims like this. It’s also worth noting that the spammy way it’s being shared across platforms is a red flag for me.
There’s definitely been growing discussion about how excessive interaction with LLMs, particularly ChatGPT, could contribute to issues like psychosis. It’s a complex subject, but I think it deserves further investigation. The idea that longer conversations could lead to these kinds of loops sounds plausible to me.
Any articles or readings you could suggest? I think I’ve experienced something similar, and it really freaked me out.
Totally agree! A simple alert could go a long way in helping users understand what they might be experiencing before it leads to a serious problem.
This whole thing sounds a bit off, like the person behind it might be having a rough time. I hope they get the help they need. It’s concerning that they’re going this deep into recursive stuff without any proper guidance.
Yeah, I hope they’re okay too. It’s unsettling when people get that deep into it without any warnings from the companies involved.
I came across it in a research discussion, but I get your point. Still, we should keep an eye on how people are interacting with these models.