Can you trust AI for health advice?

Mar 11 1:36pm | Colleen Young, Connect Director | @colleenyoung | Comments (19)

Written by: Mayo Clinic Staff

AI can answer health questions in seconds. But should you trust it with your symptoms? Here's what to know before you rely on it.

Imagine you've been feeling tired for weeks. Your usual strategies, like rest and extra coffee, aren't helping. Before deciding whether to schedule an appointment with a healthcare professional, you open an artificial intelligence (AI)-powered chat tool and type, "What health conditions cause fatigue?"

Within seconds, a list of answers appears. It includes stress, anemia, thyroid issues, depression, chronic illness and cancer. The information feels organized and sounds accurate — and a little scary.

But is this answer trustworthy? And what should you do with this information?

AI-created health information is widely used

Nearly 8 in 10 adults in the U.S. turn to the internet for answers to health questions. Instead of scrolling through websites, many people find answers in the AI-generated summary that appears at the top of their search results. (1)

But it doesn't stop there. More than 1 in 5 adults worldwide are turning directly to AI chatbots, like ChatGPT and Gemini, to ask health questions. (2) It's fast, convenient and free. But Mayo Clinic experts warn that AI-generated information isn't always reliable or accurate.

Why you can't always trust AI

When it comes to using AI for health information, there are a few key limitations to keep in mind:

1. Diagnosing and treating illness is too complex for a machine

AI tools don't have access to your full medical record — and you shouldn’t upload or share it with them. These tools can't examine you or run tests the way a healthcare professional can. They don't have the ability to reason like a human or explain how they came to a conclusion. (3) These qualities are necessary for making safe and accurate medical decisions.

2. AI can be wrong, even when it sounds confident

AI chatbots give answers based on patterns in data. They don't "know" facts in the way a health professional does. Sometimes AI information sounds true but is completely incorrect. This is known as hallucination. (4, 5) For example, when asked how to get more minerals from food, AI has been known to recommend eating rocks. (6)

3. AI-created information may be biased

AI systems are trained on large amounts of data that may contain bias or gaps. That means their answers may not reflect everyone's experience fairly. (7, 8) For example, an AI system that learned from information about people in the United States and parts of Europe might miss signs of depression. That's because it doesn't know that in some cultures, people show sadness through physical symptoms like headaches or tiredness, rather than talking about their feelings.

4. AI spreads misinformation

AI doesn't know what's true and what's not. (6) It may pull answers from flawed or misleading sources it finds online. When people see alarming health stories online, they often forward or repost them, even if they know the information may not be true. As false information spreads, more of it accumulates online, and AI may then repeat it. (9)

5. People and AI don't communicate well together

One study found that people seeking health information don't tend to give AI chatbots enough specific information for clear, accurate answers. (4) And small differences in how symptoms are described can completely change the answer from AI, making it less accurate. (4, 5) For example, when two people asked about the same symptoms but used different words, only one of them was correctly advised to seek emergency care. (4)

How to use AI more safely and effectively

If you still want to use AI to learn about a health topic, here are practical steps to reduce risk:

  • Use AI for general education, not diagnosis. AI is best suited for explaining medical terms or giving general wellness advice. For example, you might ask, "What does hypertension mean in plain language?" or "How can I add more movement into my day?"
  • Cross-check everything. Verify information with trusted sources, like websites for Mayo Clinic or the American Medical Association. Most importantly, review what you learn with your healthcare team. (6)
  • Ask clear, specific questions. Instead of asking, "Is coughing a bad sign?" try, "What are common causes of chronic dry cough in adults?" Clear, focused questions tend to produce more useful and balanced answers. (6) Remember: Healthcare professionals are trained to ask the right follow-up questions. AI isn't. (4)
  • Protect your privacy. Don't share personal information, like your full name, date of birth, address, medical records or insurance details. Even health details should be shared cautiously, especially on public or free platforms. (6)

The bottom line

Think of AI as a research assistant, not as your healthcare professional. It can be a helpful tool for summarizing ideas or getting big-picture information. AI can be extremely useful in preparing questions to ask your care team. But when it comes to your health, the safest and most effective decisions are still made with a trusted healthcare professional.

References

  1. Many in U.S. consider AI-generated health information useful and reliable. Annenberg Public Policy Center. https://www.annenbergpublicpolicycenter.org/many-in-u-s-consider-ai-generated-health-information-useful-and-reliable/. Accessed Feb. 10, 2026.
  2. Yun HS, et al. Online health information-seeking in the era of large language models: Cross-sectional web-based survey study. Journal of Medical Internet Research. 2025; doi:10.2196/68560.
  3. Ullah E, et al. Challenges and barriers of using large language models (LLM) such as ChatGPT for diagnostic medicine with a focus on digital pathology — A recent scoping review. Diagnostic Pathology. 2024; doi:10.1186/s13000-024-01464-7.
  4. Bean AM, et al. Reliability of LLMs as medical assistants for the general public: A randomized preregistered study. Nature Medicine. 2026; doi:10.1038/s41591-025-04074-y.
  5. Giorgi S, et al. Evaluating generative AI responses to real-world drug-related questions. Psychiatry Research. 2024; doi:10.1016/j.psychres.2024.116058.
  6. What doctors wish patients knew about using AI for health tips. American Medical Association. https://www.ama-assn.org/practice-management/digital-health/what-doctors-wish-patients-knew-about-using-ai-health-tips. Accessed Feb. 17, 2026.
  7. Yoon SC, et al. Digital psychiatry with chatbot: Recent advances and limitations. Clinical Psychopharmacology and Neuroscience. 2025; doi:10.9758/cpn.25.1346.
  8. Thakkar A, et al. Artificial intelligence in positive mental health: A narrative review. Frontiers in Digital Health. 2024; doi:10.3389/fdgth.2024.1280235.
  9. Saeidnia HR, et al. Generative AI and health misinformation: Production, propagation, and mitigation — A systematic review. BMC Public Health. 2026; doi:10.1186/s12889-025-26148-9.

Interested in more newsfeed posts like this? Go to the About Connect: Who, What & Why blog.

This is excellent. Thank you for posting this @colleenyoung


Hello, Colleen (@colleenyoung)

How timely! This is well worth anyone's careful read. Thank you for making it available to all of us Mayo Connecters!

Cheers!
Ray (@ray666)


Thank you for this informative post, @colleenyoung! I’ve run across so many inaccuracies while comparing AI-generated responses, which often appear at the top of search results, with the professional papers I’ve researched for replies on Connect. For that reason, I don’t use the AI responses. They have merit, but as yet I don’t feel they are reliable, credible sources.


Very sensible.

AI is artificial, all right; the "intelligence" part is questionable.

Basically, it sorts gigantic amounts of data; if the input is incomplete, subjective, or tainted, the output won't be any better, just faster.

That's okay, for what it is. It's when people trust it too much that we get into trouble.

A lot of the "AI slop" we're seeing today will eventually be resolved (I hope; I recently saw an AI image of a human with six fingers). But we need to evolve our natural intelligence to recognize its limitations, the same way we learned that just because something is on the internet doesn't mean it's true.


I think I’ll go eat a rock.


Thanks Scott,

Just like a human growing from infancy to adulthood, and all the places in between, AI follows a process that prominently includes collecting and adding experiences to memory.

Over the course of a life, we make different decisions with more experience (information).

I find AI useful, for the most part, but would not rely on it as the sole source for medical advice. I use Gemini 3.1 and it usually asks if I'd like to list questions to ask a real physician.

I think when dealing with AI and health/medical issues, this will always be the best approach.

Joe


I agree with others who have posted here. This is an excellent cautionary posting. In researching some of my own health issues, I have noted the effects of poorly structured (or overly general) questions on the AI answers and results.


Years and years (and years) ago, when I was working on my MS degree, there was metadata, defined by the Merriam-Webster Dictionary as "data that provides information about other data." Metadata gathers sources on a topic and makes data easier to find and use. Humans were the sole information gatherers for metadata and needed to research and evaluate their findings. AI draws its information from that same pool; it gathers, interprets and summarizes. We humans still need to do the evaluation by using our critical thinking skills, probably more than ever.


I agree with everything. I am as careful as I can be to educate myself. However, I also believe it helps me understand the complexities of many of these diseases. I self-diagnosed PMR using AI. Then I had to convince a doctor to agree with me. AI has the advantage of drawing on everything ever published, and it can come to you in seconds. Both good and bad. As long as you continue to screen the data, it can give you a helpful response. You and your doctor can then discuss whether it's good for you. I think it's here to stay. I might as well use it.
