AntiPattern AI UX Lab
Trust & Transparency

Fake Confidence (Hallucination)

The AI presents incorrect or fabricated information with absolute certainty.

3 principles · 3 checklist items · 2 simulation modes

What you will learn

Calibrate Trust
Cite Sources
Express Uncertainty
Run the simulation to see the failure in action.

Interactive Lab

Toggle between Bad and Good modes.

Why it fails

LLMs are probabilistic next-token predictors, not deterministic truth engines, yet their default tone is often authoritative. When a model hallucinates a fact or a nonexistent library and still sounds 100% sure, it misleads users into wasting time debugging or accepting false premises.

Root cause

Models are trained to sound human and authoritative. Without explicit uncertainty steering or UI markers (such as citations or confidence scores), the model simply predicts the most likely next token, which often mimics a confident expert.
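One way a UI can surface a confidence marker is to derive a rough score from the per-token log-probabilities that some model APIs expose. The sketch below is a minimal illustration, not a calibrated method: the function names and thresholds are assumptions, and the geometric-mean heuristic is only one possible mapping.

```python
import math

def confidence_from_logprobs(token_logprobs):
    """Map per-token log-probabilities (e.g. from an API's logprobs
    field) to a rough 0-1 score via the geometric mean of the token
    probabilities. Illustrative heuristic, not a calibrated measure."""
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

def confidence_label(score):
    # Thresholds are arbitrary illustrations, not calibrated values.
    if score >= 0.9:
        return "high confidence"
    if score >= 0.6:
        return "medium confidence"
    return "low confidence - verify before relying on this"
```

A UI might render the label next to the answer so the displayed tone matches how sure the model actually was.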

How to fix it

Calibrate Trust
Cite Sources
Express Uncertainty

Implementation checklist

  • Use phrases like 'I think' or 'Based on my data' for uncertain queries
  • Provide clickable citations for factual claims
  • Allow the model to say 'I don't know'
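The checklist above can be sketched as a small rendering layer: hedge answers below a certainty threshold, attach citations to factual claims, and fall back to 'I don't know' instead of a confident guess. The `AIAnswer` type, the `certainty` field, and the 0.5/0.9 thresholds are hypothetical choices for illustration, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AIAnswer:
    text: str
    citations: list = field(default_factory=list)  # sources backing factual claims
    certainty: float = 1.0  # self-reported or derived, in [0, 1]

def render(answer: AIAnswer, threshold: float = 0.5) -> str:
    """Apply the checklist: say 'I don't know' below the threshold,
    hedge mid-certainty answers, and append numbered citations."""
    if answer.certainty < threshold:
        return "I don't know - I couldn't find reliable information on this."
    prefix = "" if answer.certainty >= 0.9 else "Based on my data, I think "
    cites = "".join(f" [{i + 1}: {url}]" for i, url in enumerate(answer.citations))
    return prefix + answer.text + cites
```

For example, a low-certainty answer renders as a refusal rather than a confident guess, while a well-sourced one keeps its citations visible and clickable in the UI.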