Truth or Trickery
How to Spot Misinformation and Disinformation
This piece is a demonstration of how I break down complex claims to reveal assumptions, missing context, and framing errors.
We live in a world saturated with information: headlines, dashboards, expert statements, performance claims, charts, footnotes, and fine print. The challenge today isn’t access to information—it’s knowing what to trust.
Some information is wrong.
Some information is technically true while being strategically misleading.
Some information is crafted to steer your thinking while appearing neutral.
That range marks the difference between misinformation, which is simply wrong and often shared without intent to deceive, and disinformation, which is deliberately crafted to mislead.
The Problem: Technically True, Practically Misleading Information
The most dangerous information today isn’t an obvious lie.
It’s information that’s just accurate enough to pass scrutiny while quietly shaping perception.
It sounds reasonable.
It feels objective.
It discourages follow-up questions.
And that’s exactly the problem.
The Performance Claim That Starts the Question
Consider a statement you see constantly in business, finance, and leadership contexts:
“This investment has outperformed the market.”
At first glance, it sounds impressive—and reassuring.
But if you slow down, the questions start piling up:
Over what time period?
Against which benchmark?
With what level of risk?
Before or after fees?
Compared to what baseline?
Was this performance consistent or driven by a single outlier period?
The statement may be factually correct—and still misleading.
Not because it’s false.
Because it’s incomplete.
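The questions above aren't rhetorical. A minimal sketch, using hypothetical return numbers and a hypothetical 2% fee, shows how the same claim can be true over one window and false over another:

```python
# Hypothetical annual returns (as fractions), oldest first.
fund = [0.22, -0.08, 0.04, 0.31]    # the investment being promoted
index = [0.18, -0.03, 0.07, 0.25]   # a broad-market benchmark
annual_fee = 0.02                   # assumed 2% annual management fee

def cumulative(returns, fee=0.0):
    """Total growth of $1 over the period, net of an annual fee."""
    total = 1.0
    for r in returns:
        total *= (1 + r) * (1 - fee)
    return total - 1

# Last year only, before fees: the fund beats the benchmark (0.31 vs 0.25).
print(round(cumulative(fund[-1:]), 4), round(cumulative(index[-1:]), 4))

# Full period, after fees: the same fund trails the same benchmark.
print(round(cumulative(fund, annual_fee), 4), round(cumulative(index), 4))
```

Same investment, same arithmetic, opposite conclusions. The claim "outperformed the market" selects the window and omits the fee.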
When Language Obscures Instead of Clarifies
Notice how much work the phrase “outperformed the market” asks the listener to silently do.
Which market?
Which index?
Which assumptions?
Vague performance language shifts the burden of interpretation onto the audience—who often assumes the most favorable meaning.
This isn’t necessarily malicious.
But it is a known and repeatable tactic: use familiar language that sounds precise while leaving critical variables undefined.
Complexity doesn’t always appear as jargon.
Sometimes it appears as overconfidence wrapped in simplicity.
“The Data Looked Right” — Until It Didn’t
A common failure mode in organizations isn’t bad data.
It’s bad assumptions hidden inside good-looking data.
Many costly business failures trace back to spreadsheet models that were internally consistent—but structurally wrong.
Inputs were reasonable
Calculations were correct
Charts were clean
Conclusions were confident
The failure wasn’t intelligence.
It was process.
Assumptions went untested.
Edge cases were ignored.
Dependencies were simplified away.
The result wasn’t misinformation by ignorance—it was misinformation by omission.
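A toy sketch, with invented numbers, makes the failure mode concrete: every formula below computes correctly, yet the model's structure omits a shared constraint, so the confident total is wrong.

```python
# Hypothetical regional sales model: reasonable inputs, correct arithmetic.
regions = {"north": 100, "south": 80, "west": 60}   # current units sold
growth = 0.20                                        # assumed growth, everywhere

# Each line "checks out": a clean, internally consistent forecast.
forecast = {name: units * (1 + growth) for name, units in regions.items()}
naive_total = sum(forecast.values())                 # 288.0 units

# The simplified-away dependency: all regions draw on one factory,
# which can only produce 260 units.
factory_capacity = 260
actual_total = min(naive_total, factory_capacity)

print(naive_total, actual_total)   # the gap is the hidden assumption
```

Nothing in the spreadsheet is miscalculated. The error lives in what the structure leaves out, which is exactly where no formula audit will find it.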
Why This Matters Beyond Finance
If a simple performance claim can conceal assumptions, risk, and context, consider what happens with:
Strategic forecasts
Operational metrics
AI-generated summaries
Market research reports
Executive dashboards
Policy recommendations
The more complex the system, the more dangerous oversimplified truth becomes.
And once a conclusion lands—especially from a trusted source—it tends to stick. Even when corrections appear later, most people don’t revisit the original belief.
That’s not an accident of human psychology.
It’s a predictable pattern.
The Solution: How to Spot Misinformation and Disinformation
You don’t need to distrust everything.
You need to question better.
1. Break Statements Down Word by Word
Even true statements can mislead by what they leave out.
Take a claim like “This investment has outperformed the market.” Ask:
Over what time frame?
Compared to which benchmark?
At what level of volatility or downside risk?
Before or after costs?
Under what assumptions?
Precision reveals whether clarity actually exists.
2. Ask: Why Am I Being Shown This?
Information doesn’t appear randomly.
Ask:
Who benefits if I accept this claim?
What decision does this push me toward?
What questions does it quietly discourage?
Why is this being emphasized instead of something else?
Motivation often explains framing better than facts alone.
3. Distinguish Absence of Proof from Proof of Absence
Statements like:
“There’s no evidence this caused harm.”
“The model shows no significant downside.”
Do not mean:
“This is safe.”
“This risk doesn’t exist.”
They often mean uncertainty was bounded narrowly and framed favorably.
That gap is where disinformation lives.
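One concrete way that gap opens, sketched here with standard statistical reasoning and invented study numbers, is low statistical power: a small study will usually report "no significant effect" even when the effect is real.

```python
# A minimal sketch (stdlib only): "no significant effect found" from a
# small study is weak evidence that no effect exists.
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 20          # hypothetical small study: 20 trials
threshold = 15  # declare an effect only at 15+ events, keeping
                # the false-positive rate under 5% when p = 0.5

alpha = binom_tail(n, threshold, 0.5)   # chance of a false alarm
power = binom_tail(n, threshold, 0.6)   # chance of detecting a real
                                        # effect of size p = 0.6

print(round(alpha, 3), round(power, 3))  # roughly 0.021 and 0.126
```

With power near 13%, the study reports "no significant effect" about seven times out of eight even though the effect is there. "No evidence of harm" described the study's reach, not the world.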
4. Watch for Strategic Simplicity
When complex systems are explained with:
One metric
One chart
One confident sentence
Pause.
Clarity is rarely accidental.
Neither is oversimplification.
5. Question Neutral-Sounding Labels
Words like:
“Adjusted”
“Normalized”
“Assumes stable conditions”
“Other factors”
Aren’t wrong—but they often hide where judgment calls were made.
If something truly didn’t matter, it wouldn’t need a label.
6. Ask the Most Important Question of All
Is this information empowering me—or managing me?
Empowering information:
Improves understanding
Makes tradeoffs visible
Encourages better questions
Managing information:
Closes inquiry
Shifts responsibility
Creates premature certainty
Why Transparency Still Matters
The solution to misinformation isn’t less speech.
It’s better speech.
That means:
Clear assumptions
Transparent benchmarks
Open debate
Willingness to show the work
Many improvements in safety, finance, and decision-making exist today only because someone questioned a technically correct but misleading claim.
You don’t need to become cynical.
You don’t need to assume bad intent.
You need to become precise.
Question wording.
Question assumptions.
Question incentives.
Question what’s highlighted—and what’s missing.
That’s how you move from manipulation to understanding.
Daniel Stih is a systems thinker who explores how complex problems actually work. His writing focuses on clarity, hidden assumptions, and the difference between structure and process across domains like engineering, design, and creative work.
More at: firstascentthinking.com


