Hey everyone! I'm not an expert in AI, but I've noticed that while AI does pretty well with general knowledge, it often falls short in specialized areas where I have more expertise. I'm curious if there's a way to prompt an AI so that it reviews and verifies its answers before presenting them to users. Is it possible for it to continually check for inaccuracies until it can produce a credible response?
3 Answers
Totally get where you’re coming from! Some of the answers really miss the mark, especially in niche fields. For instance, when I asked about bassoon reed measurements, it gave me numbers that were outright ludicrous! It’s wild how it sometimes fabricates details, even inventing names of people in the bassoon community. It feels like there should be a system where you could run the AI's response through another AI for fact-checking before accepting it, something like the sketch below.
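Here's a minimal sketch of that draft-then-verify idea, assuming the OpenAI Python SDK (openai>=1.0) and a gpt-4o model. The function name, prompts, and round limit are all made up for illustration, and keep in mind the verifier pass shares the same blind spots as the first pass, so it can still wave errors through.

```python
# Minimal draft-then-verify loop (sketch only, not a guaranteed fact-checker).
from openai import OpenAI

client = OpenAI()

def draft_then_verify(question: str, max_rounds: int = 3) -> str:
    """Get an answer, then ask a second pass to critique it.

    Repeats until the critique pass reports no issues or the round limit
    is hit. This catches some obvious mistakes but cannot guarantee
    factual accuracy.
    """
    answer = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    for _ in range(max_rounds):
        # Second pass: review the draft for factual errors or fabrications.
        critique = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": (
                    "Review the answer below for factual errors or fabricated details. "
                    "Reply with only 'OK' if you find none, otherwise list the problems.\n\n"
                    f"Question: {question}\n\nAnswer: {answer}"
                ),
            }],
        ).choices[0].message.content

        if critique.strip().upper().startswith("OK"):
            break

        # Third pass: revise the answer using the reviewer's feedback.
        answer = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": (
                    f"Question: {question}\n\nPrevious answer: {answer}\n\n"
                    f"Reviewer feedback: {critique}\n\n"
                    "Write a corrected answer that addresses the feedback."
                ),
            }],
        ).choices[0].message.content

    return answer
```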
Unfortunately, there isn't a way to guarantee that an AI will fact-check itself before showing results. The models already carry instructions to avoid making up answers when they aren't sure, but that can make them overly cautious and lead to "I don’t know" replies even when they could actually provide the information. It's frustrating, because prompting techniques like asking the model to break down its thought process can help a bit, but they're not a cure-all for inaccuracies.
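If it helps, this is roughly what that "break down your thought process" prompting looks like in practice. Again just a sketch assuming the OpenAI Python SDK and gpt-4o; the system-prompt wording is only an example and doesn't eliminate confident mistakes.

```python
# Sketch of a "reason before answering" system prompt (example wording only).
from openai import OpenAI

client = OpenAI()

REASON_FIRST = (
    "Before giving your final answer: (1) list the facts you are relying on, "
    "(2) flag any of them you are not certain about, and (3) if a key fact is "
    "uncertain, say so explicitly instead of guessing. Then give the answer."
)

def careful_answer(question: str) -> str:
    # The system message nudges the model to expose its reasoning and
    # uncertainty; it helps on some questions but is not a fix-all.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": REASON_FIRST},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```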
Actually, you might be onto something! The o-series models are designed with stronger reasoning capabilities, but even they don't guarantee verified answers. I once asked a variant of your question and got the reply that prompt engineering can't force self-verification, since every output is essentially a separate generation with no guaranteed source of truth to check against. It's a bit of a hit-or-miss situation.