I recently asked ChatGPT for a brief summary and some key academic references on a specific topic. Unfortunately, it fabricated a reference: the journal issue it cited contained no such article, and the DOI link it provided resolved to a completely unrelated paper. In total, ChatGPT invented two references during the conversation. How can I adjust my prompts to get better, more accurate sources? Here's my original prompt for context:
>Hi! Could you give me a summary and key academic sources for discussing the ?
4 Answers
I suggest using a clearer prompt structure each time. For instance, try saying something like, 'Please provide a summary about and ensure the references you give are real and can be found online.' This can make it more likely for the AI to stick to credible sources.
That sounds frustrating! Rather than relying on a single magic phrase, focus on defining your expectations clearly: tell the model exactly what you mean by 'key academic sources' and ask for peer-reviewed papers specifically.
One way to improve your interactions is by explicitly asking ChatGPT to perform searches for academic sources or verify the accuracy of its references. It might help clarify that you're looking for reliable studies, which could prompt a more careful response.
You’re not alone! Source hallucination is a common issue with ChatGPT, so it’s best to cross-check every reference it provides yourself. Building that verification step into your workflow will help you catch inaccuracies before they spread.
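If you want to automate part of that cross-checking, here is a minimal sketch that looks up a DOI against the public Crossref REST API (`api.crossref.org`), which returns HTTP 404 for DOIs that are not registered. The function names (`crossref_url`, `doi_exists`) are made up for this example; note that a DOI resolving does not guarantee it matches the title ChatGPT gave you, so still compare the returned metadata by hand.

```python
import urllib.error
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works/"

def crossref_url(doi: str) -> str:
    """Build the Crossref lookup URL for a DOI string like '10.1000/182'."""
    # Percent-encode the DOI but keep the slash that separates prefix/suffix.
    return CROSSREF_API + urllib.parse.quote(doi, safe="/")

def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref has a record for this DOI, False on a 404."""
    try:
        with urllib.request.urlopen(crossref_url(doi), timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # DOI is not registered with Crossref
        raise  # some other HTTP failure; don't treat it as "fake reference"
```

Run `doi_exists` on each DOI the model gives you; anything returning False is a strong sign the citation was hallucinated (though non-Crossref DOIs, e.g. some DataCite records, may also miss here).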