Why more AI doesn’t automatically mean more trust.
We are living through the peak of the AI hype cycle. Trillions of dollars are being poured into infrastructure, tools, and promises of productivity.
But beneath the optimism sits a quieter problem: credibility erosion.
More AI-generated information doesn’t automatically lead to better outcomes. In many cases, it does the opposite.
This article is a curated and abridged reflection on a talk by Eva Digital Trust, exploring how genAI, when used carelessly, can quietly undermine trust, expertise, and brand credibility, and what to do instead.
👉 FULL CREDIT FOR THIS PIECE at the bottom of this post.
1. AI hype doesn’t equal value:
AI investment numbers are staggering, but hype alone doesn’t deliver ROI. When use cases are vague and productivity gains don’t materialise, pressure builds, especially on leaders, to prove AI is “working.”
The problem isn’t AI itself. It’s deploying it without clarity, strategy, or accountability.
2. Hallucinations are a feature, not a bug:
Large language models don’t “know” facts; they predict patterns.
That means hallucinations are inherent to how they work.
The danger is subtle: outputs often sound confident, structured, and professional, while quietly being wrong, irrelevant, or misaligned with context, regulation, or real constraints.
This leads to what’s now called “workslop”: polished-looking content that creates more rework, more risk, and more cost. You can’t slop your way to a credible strategy, product, or point of view.
3. Visible AI use can trigger bias:
Research shows that openly disclosing AI use can lower perceptions of competence, particularly for women, older workers, neurodivergent professionals, and people writing in a second language.
AI may aim for neutrality. Humans do not. This means credibility isn’t just about whether you use AI, but how visibly and how thoughtfully you use it.
4. Trust in AI is deeply divided:
Global trust in AI splits roughly into thirds: trust, distrust, and uncertainty. But the differences across demographics are stark. People in the Global South tend to be more optimistic. The Global North, particularly older professionals and women, is far more sceptical. If your audience is cautious, “AI-powered” messaging may actively backfire. Context matters.
5. The illusion of effort still shapes trust:
Studies show that work perceived as “high effort” is judged as more trustworthy, innovative, and creative, even when the output itself is identical. Right now, AI-generated content carries a perception of low effort. Fair or not, that perception affects how your work is judged.
So how do we use AI without losing credibility?
Use AI as an editor, not the author:
AI excels at structuring drafts, improving clarity, checking consistency, and reformatting content. It is far weaker at original insight, judgment, nuance, and voice. Start with what you know. Then let AI assist, not replace, your thinking. Wharton professor Ethan Mollick calls this working at the “jagged frontier”: begin with tasks where you understand the domain well, and expand outward as you learn where AI helps and where it fails.
Build AI-augmented teams, not AI-replaced ones:
The most effective organisations treat AI like a junior collaborator: trained, supervised, and guided with guardrails. Examples like McKinsey’s internal AI systems show how domain-specific knowledge, templates, and tone controls can dramatically improve usefulness while reducing risk. This is less about prompting perfection and more about system design.
Distinctiveness is your moat:
The internet was already crowded with sameness before generative AI. Now it’s flooded.
To stay credible:
- Clarify what you stand for
- Codify your standards (brand, legal, ethical)
- Communicate with intention, using AI to refine, not originate
Consistency, original perspective, and a recognisable voice build trust over time.
Beware “AI-speak”:
Certain phrases, rhythms, and structures now instantly signal AI use. They are predictable, beige, and forgettable. If it doesn’t sound like something you’d say out loud to a client or colleague, it probably doesn’t belong in your work. AI is excellent at cutting fluff, but only if you tell it to. It is not good at creating language that truly resonates.
Final thought:
The path to credibility in an AI-saturated world is not avoidance; it’s intentional use.
Those who thrive won’t be the loudest adopters. They’ll be the ones who use AI to amplify their expertise, not outsource it. In a world drowning in slop, your voice, shaped by experience, judgment, and integrity, remains irreplaceable.
Full credit and original thinking by Eva Digital Trust.
This abridged piece is shared with respect for the original work.