
Sam Altman’s warning: Why you should question AI trust

Fast Company

In a surprising confession, OpenAI’s CEO admitted he didn’t expect people to trust AI this much.


Would it be fair to say we live in the Matrix?

In a world where we turn to our smartphones for everything from tracking steps to managing chronic illnesses, it’s no surprise that artificial intelligence (AI) has quickly become a daily companion.

Need mental health support at 2am? There’s an AI chatbot for that. Trying to draft a tricky work email? AI has your back. But what happens when we lean so far into this tech that we forget to question it?

That’s exactly the concern raised by OpenAI CEO Sam Altman, the man behind ChatGPT himself. 

During a candid moment on the OpenAI Podcast earlier this month, Altman admitted, “People have a very high degree of trust in ChatGPT, which is interesting because AI hallucinates. It should be the tech that you don’t trust that much.”

Yes, the guy who helped create ChatGPT is telling us to be cautious of it.

But what does “AI hallucination” even mean?

In AI lingo, a “hallucination” isn’t about seeing pink elephants.

Yahoo reports that, in simple terms, an AI hallucination is when the machine gives us information that sounds confident but is completely false. 

Imagine asking ChatGPT to define a fake term like “glazzof” and it creates a convincing definition out of thin air just to make you happy. Now imagine this happening with real topics like medical advice, legal opinions, or historical facts. This is not a rare glitch either.

According to a study published by Stanford University’s Center for Research on Foundation Models, AI models like ChatGPT hallucinate 15% to 20% of the time, and the user may not even know. The danger lies not in the errors themselves, but in how convincingly the tool presents them.

Altman’s remarks are not merely cautionary; they resonate as a plea for awareness, emphasising that we are on the brink of something transformative and that society needs guardrails to match.


Why do we trust AI so much?

Part of the reason is convenience. It's fast, polite, always available, and seemingly informed. Plus, tech companies have embedded AI into every corner of our lives, from the smart speaker in our kitchen to our smartphone keyboard.

But more than that, there’s a psychological comfort in outsourcing our decisions. Research indicates that people trust AI because it reduces decision fatigue.

When life feels overwhelming, especially post-pandemic, we lean into what feels like certainty, even if that certainty is artificial.

That mental shortcut is called "cognitive fluency".

The smoother information sounds, the more our brain tags it as true, a bias confirmed by a 2022 MIT-Stanford collaboration that tracked user interactions with chatbots in real time.

Reliance on questionable data isn't just an intellectual risk. It can snowball into:

  1. Decision fatigue: offloading choices to a machine can leave us less practised, and less confident, at making them ourselves.
  2. Medication errors, such as following an AI-generated supplement regimen that conflicts with existing prescriptions.
  3. Amplified anxiety: when the easy answer eventually unravels, we feel betrayed and trust our own judgment less, notes Prof. Emily Bender of the University of Washington.

Recent Pew Research data shows that 35% of U.S. adults have already used generative AI like ChatGPT for serious tasks, including job applications, health questions, and even parenting advice.

The risk of blind trust

Here’s where things get sticky. AI isn’t human. It doesn’t “know” the truth. It merely predicts the next best word based on vast amounts of data. This makes it prone to repeating biases, inaccuracies, and even fabricating facts entirely.

Mental health and tech dependency

More than just a tech issue, our blind trust in AI speaks volumes about our relationship with ourselves and our mental health. Relying on a machine to validate our decisions can chip away at our confidence and critical thinking skills.

We're already in an age of rising anxiety, and outsourcing judgment to AI can sometimes worsen decision paralysis.

The World Health Organisation (WHO) has also flagged the emotional toll of tech overuse, linking digital dependency to rising stress levels and isolation, especially among young adults. Add AI into the mix, and it becomes easy to let the machine speak louder than your inner voice.

Altman didn’t just throw the problem on the table; he offered a warning that feels like a plea:

“We need societal guardrails. We’re at the start of something powerful, and if we’re not careful, trust will outpace reliability.”

Here are three simple ways to build a healthier relationship with AI:

  1. Double-check the facts: don’t assume AI is always right. Cross-reference with trusted sources.
  2. Keep human input in the loop, especially for big life decisions. Consult professionals (doctors, career coaches, financial advisors) when it matters most.
  3. Reflect before you accept: ask yourself, “Does this align with what I already know? What questions should I ask next?”
