More concise chatbot responses tied to increase in hallucinations, study finds

You may like the chatbot’s response, but that doesn’t mean it’s true.

Asking any of the popular chatbots to be more concise “dramatically impact[s] hallucination rates,” according to a recent study.

French AI testing platform Giskard published a study analyzing chatbots, including ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek, for hallucination-related …
