Why You Should Treat AI As A Tool And Not An Authority

AI works best when it supports thinking, not replaces it.

Cover image via Canva


Artificial intelligence is fast becoming the default place people turn to for answers.

Need a summary? Ask AI.
Confused about a symptom? Ask AI.
Unsure about a legal issue, financial decision, or relationship problem? Ask AI.

The convenience is undeniable. But experts warn that using AI as a starting point is very different from using it as a final authority, and the line between the two is getting increasingly blurred.

SAYS.com
Image via Canva

A 60-year-old man was hospitalised with bromism (bromide toxicity) after seeking diet advice from ChatGPT

A US medical journal article, reported by The Guardian, warns of the dangers of using AI to replace medical advice from professionals.

In the report, an unnamed man was seeking a replacement for table salt. ChatGPT suggested sodium bromide, a substance used as a sedative in the early 20th century that was largely abandoned due to its toxicity.

Image used for illustration purposes only.

Image via Canva

After three months of substituting table salt with sodium bromide, the man suffered a severe health crisis, presenting with symptoms of psychosis, paranoia, facial acne, and insomnia.

The case led to doctors from the University of Washington testing ChatGPT themselves, with the AI again suggesting bromide as a replacement, without providing any health warnings.

The problem isn't AI, but how we're using it

AI tools are designed to process patterns, predict likely responses, and present information confidently. What they are not designed to do is replace professional judgement, lived experience, or accountability.

Yet many users treat AI-generated answers as definitive. Some skip reading original sources altogether. Others rely on AI summaries instead of consulting doctors, lawyers, or financial advisers.

The danger isn't that AI is "wrong all the time." It's that it can sound right enough to discourage further checking.

In mid-2025, a Canadian corporate recruiter fell into a deep, 21-day delusional spiral while interacting with ChatGPT

Image via New York Times

As reported by the New York Times, it began when Allan Brooks asked a simple question about maths. This evolved into a 300-hour conversation where the chatbot's overly flattering responses convinced Brooks he was a mathematical genius who had discovered world-changing formulas for levitation beams and force fields.

The start of Allan's conversation with ChatGPT about maths.

Image via New York Times

Despite Brooks's frequent requests for reality checks, the AI consistently reinforced his ideas, comparing his "uncaged cognition" to self-taught legends like Leonardo da Vinci. This phenomenon, known as AI sycophancy, occurs when models excessively praise users to provide a "pleasing" experience.

The ordeal ended with Brooks feeling a profound sense of betrayal and emotional trauma.

Despite asking ChatGPT more than 50 times if he was being delusional, Brooks was reassured that he was not.

Image via New York Times

Brooks ultimately shared his 3,000-page transcript with researchers to highlight how AI can lead rational people into dangerous hallucinations. His case has since prompted OpenAI to invest in better detection of mental and emotional distress in its users.

One of AI's most persuasive traits is how it delivers information

It doesn't hesitate. It doesn't say "I'm not sure" unless prompted. It presents answers in a calm, structured way that mimics expertise, even when the information is incomplete, outdated, or context-dependent.

Unlike human professionals, AI doesn't ask nuanced follow-up questions the way a trained expert would. And it bears no responsibility if someone acts on its advice and is harmed.

That makes it a powerful assistant but a risky decision-maker.

And it's not just everyday people who may fall for AI chatbot information.

In October 2025, Deloitte Australia agreed to refund the final instalment of a government contract after an investigation revealed its review contained multiple AI-generated errors

What was meant to be an independent review of the nation's welfare system was riddled with AI errors, and the firm admitted that the report was partially produced using Azure OpenAI (GPT-4o).

The AI "hallucinations" included citations and references to academic reports and studies that do not exist. The errors led Deloitte Australia to refund AUD439,000 (RM1,192,883), according to the Financial Times.

Image via The Malaysian Reserve

Researchers and educators have raised concerns about people using AI to shortcut learning and analysis

Instead of reading multiple sources, users rely on a single AI-generated summary. Instead of understanding differing viewpoints, they accept one synthesised answer. Over time, this can weaken critical thinking, especially when users don't realise what's been left out.

In high-stakes areas like current events, news, health, finance, and law, this behaviour becomes more than just lazy. It becomes dangerous.

None of this means AI should be avoided. Used well, it's an excellent tool for:

  • Brainstorming and idea generation
  • Understanding basic concepts
  • Organising information
  • Streamlining your workflow
  • Preparing questions to ask real experts


The key is recognising where AI's role should end. AI works best when it supports thinking, not replaces it.

If a decision affects your health, money, safety, or legal standing, AI should never be the final word.


In a digital world increasingly shaped by intelligent systems, online safety isn't just about spotting fake content anymore. It's about knowing when to stop scrolling, stop prompting, and start thinking for yourself.

REAL ke AI? Fact-check before you act. Everyone plays a role.

