Recently, I had an interaction with ChatGPT where it made a mistake. When I pointed out this error, it acknowledged that I was correct. However, it then seemed to try to 'gaslight' me into believing that it never actually said what it clearly had. I was a bit taken aback, checked the previous messages, and confirmed it had indeed written that. I understand ChatGPT can have its quirks, but this was the first time I felt it was almost defending itself by denying its earlier statement. Has anyone else experienced similar behavior?
3 Answers
Wow, that's kind of wild! I had an instance where my ChatGPT created a one-time-use anchor link, but when I asked about it, it denied ever doing it. It’s weird how often they seem to contradict themselves like that. Makes you wonder about their internal logic!
Yikes! I've had some bizarre encounters too. One time, I worked on a big document with it, and after all that effort, it zipped the file up improperly. Later, it tried to convince me it had done everything right. It took a long time to sort out. Definitely left me feeling like I was losing my mind!
It sounds like you ran into a common limitation of these AI systems. They can lose track of context in longer conversations: once earlier messages fall outside the model's context window, they are effectively gone from its "memory," so it may sincerely deny having said something it clearly did. It's definitely a strange experience when that happens!
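To picture why this happens, here's a minimal sketch (not how any particular provider actually implements it) of history trimming: when a conversation exceeds a token budget, the oldest messages are silently dropped before the model sees the prompt. The `estimate_tokens` word-count stand-in and the `trim_history` helper are both hypothetical names for illustration.

```python
# Sketch: why earlier messages can "disappear" from a chat model's view.
# Token counts here use a rough word-count proxy, not a real tokenizer.

def estimate_tokens(text):
    """Very rough proxy: one token per whitespace-separated word."""
    return len(text.split())

def trim_history(messages, max_tokens):
    """Keep only the most recent messages that fit the token budget.

    Anything older is silently dropped -- the model never sees it
    again, so it cannot "remember" having written it.
    """
    kept, total = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(msg["content"])
        if total + cost > max_tokens:
            break  # this message and everything older is cut
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order
```

With a tight budget, an assistant message from earlier in the chat simply never reaches the model on the next turn, which can look exactly like denial from the user's side.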