I've had a couple of frustrating experiences where I've wasted time pursuing incorrect information given by ChatGPT. It seems to always present answers with a sense of certainty, even when the data may not be reliable. I'm wondering how we might train it to use phrases like "I'm not sure, but the data suggests..." or "I can't find enough information to be certain..." This way, we could better gauge the reliability of the information and act accordingly. Has anyone else thought about this or found ways to adjust how ChatGPT presents uncertainty?
1 Answer
That's an interesting point! A large language model (LLM) like ChatGPT doesn't represent uncertainty the way humans do: it predicts the next token from context, and confident-sounding phrasing is often simply the most likely continuation, which leads to overly assertive answers. Being explicit about your expectations, for example by instructing it up front to flag low-confidence claims, can nudge it to hedge when you need that (see the sketch below).
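If you're calling the model through the API rather than the web UI, the natural place to be explicit is the system message. Here's a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and temperature are illustrative assumptions rather than a guaranteed fix, since the model can still sound confident despite the instruction.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed prompt wording -- adjust to taste.
SYSTEM_PROMPT = (
    "When you are not confident in an answer, say so explicitly. "
    "Preface uncertain claims with phrases like 'I'm not sure, but "
    "the data suggests...' and say plainly when you cannot find "
    "enough information to be certain."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # lower temperature tends to curb speculative phrasing
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Who first proposed the transformer architecture, and when?"))
```

Keep in mind this only shapes the wording: the hedges you get back are themselves generated text, not a calibrated confidence score, so you still shouldn't treat "I'm fairly sure..." as a reliable probability.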