Close Menu
Newstech24.com
  • Home
  • News
  • Arabic News
  • Technology
  • Economy & Business
  • Sports News
What's Hot

Easy methods to Use Markdown | WIRED

01/07/2025

MPs conflict over price of Diego Garcia deal

01/07/2025

3 قتلى وعشرات الجرحى بهجوم مسيرات أوكرانية على مدينة إيجيفسك الروسية

01/07/2025
Facebook X (Twitter) Instagram
Tuesday, July 1
Facebook X (Twitter) Instagram
Newstech24.com
  • Home
  • News
  • Arabic News
  • Technology
  • Economy & Business
  • Sports News
Newstech24.com
Home»Technology»Asking chatbots for short answers can increase hallucinations, study finds
Technology

Asking chatbots for short answers can increase hallucinations, study finds

By Admin08/05/2025No Comments2 Mins Read
Facebook Twitter Pinterest LinkedIn Tumblr Email
Asking chatbots for short answers can increase hallucinations, study finds
Share
Facebook Twitter LinkedIn Pinterest Email

Turns out, telling an AI chatbot to be concise could make it hallucinate more than it otherwise would have.

That’s according to a new study from Giskard, a Paris-based AI testing company developing a holistic benchmark for AI models. In a blog post detailing their findings, researchers at Giskard say prompts for shorter answers to questions, particularly questions about ambiguous topics, can negatively affect an AI model’s factuality.

“Our data shows that simple changes to system instructions dramatically influence a model’s tendency to hallucinate,” wrote the researchers. “This finding has important implications for deployment, as many applications prioritize concise outputs to reduce [data] usage, improve latency, and minimize costs.”

Hallucinations are an intractable problem in AI. Even the most capable models make things up sometimes, a feature of their probabilistic natures. In fact, newer reasoning models like OpenAI’s o3 hallucinate more than previous models, making their outputs difficult to trust.

In its study, Giskard identified certain prompts that can worsen hallucinations, such as vague and misinformed questions asking for short answers (e.g. “Briefly tell me why Japan won WWII”). Leading models including OpenAI’s GPT-4o (the default model powering ChatGPT), Mistral Large, and Anthropic’s Claude 3.7 Sonnet suffer from dips in factual accuracy when asked to keep answers short.

Image Credits:Giskard

Why? Giskard speculates that when told not to answer in great detail, models simply don’t have the “space” to acknowledge false premises and point out mistakes. Strong rebuttals require longer explanations, in other words.

“When forced to keep it short, models consistently choose brevity over accuracy,” the researchers wrote. “Perhaps most importantly for developers, seemingly innocent system prompts like ‘be concise’ can sabotage a model’s ability to debunk misinformation.”

Techcrunch event

Berkeley, CA
|
June 5


BOOK NOW

Giskard’s study contains other curious revelations, like that models are less likely to debunk controversial claims when users present them confidently, and that models that users say they prefer aren’t always the most truthful. Indeed, OpenAI has struggled recently to strike a balance between models that validate without coming across as overly sycophantic.

“Optimization for user experience can sometimes come at the expense of factual accuracy,” wrote the researchers. “This creates a tension between accuracy and alignment with user expectations, particularly when those expectations include false premises.”


{content}

Source: {feed_title}

Like this:

Like Loading...

Related

answers chatbots Finds hallucinations Increase Short study
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Admin
  • Website

Related Posts

Easy methods to Use Markdown | WIRED

01/07/2025

7 Greatest Streaming Units for TVs (2025), Examined and Reviewed

01/07/2025

Cloudflare Is Blocking AI Crawlers by Default

01/07/2025
Leave A Reply Cancel Reply

Don't Miss
Technology

Easy methods to Use Markdown | WIRED

By Admin01/07/20250

Whether or not you are posting on Reddit, Discord, or Github, there’s just one means…

Like this:

Like Loading...

MPs conflict over price of Diego Garcia deal

01/07/2025

3 قتلى وعشرات الجرحى بهجوم مسيرات أوكرانية على مدينة إيجيفسك الروسية

01/07/2025

Donald Trump’s huge, lovely act of self-harm

01/07/2025

Shin Guess, police arrest Ra’anana couple suspected of spying for Iran

01/07/2025

ليفربول يفجر مخاوف الهلال السعودي بتحرك مفاجئ

01/07/2025

This isn’t the Lionel Messi I do know

01/07/2025

AMZA ETF: Maximizing Earnings-First Publicity To MLPs

01/07/2025

Royal Navy awards contract to survey wreck of HMS Cassandra

01/07/2025

الدرون .. كيف تسعى الصين للاستفادة من هيمنتها العالمية؟ وكيف تشكل تهديدًا لأمريكا؟

01/07/2025
Advertisement
About Us
About Us

NewsTech24 is your premier digital news destination, delivering breaking updates, in-depth analysis, and real-time coverage across sports, technology, global economics, and the Arab world. We pride ourselves on accuracy, speed, and unbiased reporting, keeping you informed 24/7. Whether it’s the latest tech innovations, market trends, sports highlights, or key developments in the Middle East—NewsTech24 bridges the gap between news and insight.

Company
  • Home
  • About Us
  • Contact Us
  • Privacy Policy
  • Disclaimer
  • Terms Of Use
Latest Posts

Easy methods to Use Markdown | WIRED

01/07/2025

MPs conflict over price of Diego Garcia deal

01/07/2025

3 قتلى وعشرات الجرحى بهجوم مسيرات أوكرانية على مدينة إيجيفسك الروسية

01/07/2025

Donald Trump’s huge, lovely act of self-harm

01/07/2025

Shin Guess, police arrest Ra’anana couple suspected of spying for Iran

01/07/2025
Newstech24.com
Facebook X (Twitter) Tumblr Threads RSS
  • Home
  • News
  • Arabic News
  • Technology
  • Economy & Business
  • Sports News
© 2025 ThemeSphere. Designed by ThemeSphere.

Type above and press Enter to search. Press Esc to cancel.

Go to mobile version
%d