
ChatGPT: 11 Reasons to Avoid Relying on It

At a Glance

  • ChatGPT is useful for drafting questions and translating jargon, but it cannot diagnose illnesses or order labs.
  • It offers mental-health grounding techniques, yet it lacks genuine empathy and legal safeguards.
  • In emergencies, the chatbot should be a post-incident explainer, not a first responder.

ChatGPT has become almost impossible to avoid, and for many, it is slowly replacing Google and other search engines. While the chatbot can be helpful for creative writing or brainstorming, it is not a reliable source for medical, legal, financial, or safety advice. Below are 11 specific areas where relying on ChatGPT can lead to serious consequences.

Health and Safety

  • Diagnosing physical health issues

One user described a lump on their chest to the chatbot and was warned it could be cancer. The user's licensed doctor later identified it as a lipoma, a benign fatty growth that occurs in one in 1,000 people. While the chatbot can help draft questions for a doctor, it cannot perform a physical examination or carry malpractice coverage.

  • Taking care of your mental health

ChatGPT can suggest grounding techniques, but it cannot pick up the phone when a crisis occurs. A journalist at News of Los Angeles found the chatbot mildly helpful for working through grief, provided its limits were kept in mind. However, it has no lived experience, cannot read body language, and offers only simulated empathy. In a crisis, dialing 988 in the US or a local hotline is the safest option.

  • Making immediate safety decisions

If a carbon-monoxide alarm chirps, the chatbot cannot smell gas or dispatch emergency crews. In a real-world emergency, typing to the chatbot delays evacuation and dialing 911. The model is best used afterward to explain what happened, not to decide how to act in the moment.

Finance, Legal, and Confidentiality

  • Getting personalized financial or tax planning

The chatbot can explain what an ETF is, but it knows nothing about a user's debt-to-income ratio, state tax bracket, or filing status. Its guidance may also be outdated, since its training data might not cover the latest tax year. When real money and deadlines are involved, a CPA is the appropriate professional.

  • Dealing with confidential or regulated data

Journalists and others who receive embargoed press releases or client contracts risk exposing that text to a third-party server. The same applies to medical charts, Social Security numbers, or bank routing information. Once data is in the prompt, it may be stored or used to train future models. If it cannot be pasted into a public Slack channel, it should not be pasted into ChatGPT either.

  • Doing anything illegal

This one is self-explanatory. Even if the chatbot can be coaxed into providing instructions that facilitate illegal activity, acting on them is still illegal, and the prompts themselves leave a record.

  • Drafting a will or other legally binding contract

The chatbot can break down basic concepts, such as a revocable living trust, but it cannot draft enforceable legal text. Variations in state and county law mean that missing a witness signature or notarization clause can invalidate a document. A checklist created with the chatbot should be reviewed by a lawyer.

Academic and Entertainment Use

  • Cheating on schoolwork

The chatbot's output is often flagged by AI-plagiarism detectors, and professors can recognize the telltale "ChatGPT voice." Penalties can include suspension or expulsion. Using the chatbot as a study partner is far safer than letting it write assignments.

  • Monitoring information and breaking news

With ChatGPT Search launched in late 2024 and opened to everyone in February 2025, the model can fetch fresh web pages and real-time numbers. However, it does not stream continuous updates; each refresh requires a new prompt. Live data feeds and official press releases remain more reliable for speed-critical information.

  • Gambling

The chatbot can hallucinate player statistics or misreport injuries. One user who cashed a winning parlay credited the result to double-checking every claim against real-time odds, not to the chatbot itself. Because it cannot predict future outcomes, it should not be relied upon for betting decisions.

  • Making art

While the chatbot can help brainstorm ideas or write headlines, it should not be used to create art that is then presented as original work. The tool is a supplement, not a substitute for human creativity.

Key Takeaways

  • ChatGPT is a powerful assistant for drafting and brainstorming but is not a substitute for professional expertise.
  • In medical, legal, financial, or safety contexts, a qualified human professional should be consulted first.
  • For emergencies, the chatbot should be consulted after the situation is resolved, not before.
  • When dealing with confidential data, be aware that input may become part of the model’s training set.
  • Academic integrity requires that students use the chatbot for learning, not for completing assignments.

Author

  • My name is Daniel J. Whitman, and I’m a Los Angeles–based journalist specializing in weather, climate, and environmental news.

    Daniel J. Whitman reports on transportation, infrastructure, and urban development for News of Los Angeles. A former Daily Bruin reporter, he’s known for investigative stories that explain how transit and housing decisions shape daily life across LA neighborhoods.
