A Word About ChatGPT / Grok / Etc
I use AI quite a bit and for a wide variety of reasons. In nearly every instance, the more complex the question, the more I have to correct the model. I don't use it to tell me what to do or how to do it; I can already do the things I want from it, but it can sometimes offer me a shorter path to get there.
That being said, let me give you a recent example. I've been writing code for many years, but the types of integrations I work with change quite frequently, and I might not touch some system for years and have to go brush up on the changes so I can write something to integrate with it.
This was my recent case. I first asked AI to tell me what changed. Fair enough. Once done, I asked it to write a quick bit of code to do something; this saves me 12 hours of work if it is correct. Because I do know what I'm doing, I spent the first 30 minutes constantly refining the very incorrect and non-working answer, then another 30 or so making it actually useful. In the end, I still saved 11 hours, and I learned about the changes that need to be accommodated. Then I spent another hour hand-editing to get it up to my high standards.
The net result is that a 12-hour job took me 2 hours. It did not spit out the right answer the first time, nor the tenth. Because I fully understood what it spat out and am fluent in what I asked it to do, I was able to ask the right questions, give it the right data, and get a nearly right result.
Now enter medical advice or, specifically, prostate cancer advice. When you are first diagnosed, you have only a very small bit of information to help you, which means you are far from fluent. Now imagine you ask AI for medical advice and your experience is like mine: mostly wrong until you, as someone who knows much about the topic, correct your way to the right answer. A newly diagnosed patient can't do that correcting.
Most of us know there is a host of information needed before a treatment path can be ascertained. We know because we've been there and learned it, often from sites like this. Even once we have every scrap of information, there is no hard and fast treatment option, and often times you could ask ten doctors what to do and get ten different answers. And these are all people who know precisely what to do.
Ask AI what PI-RADS is. Ask what PSA is. Ask what Gleason means. Ask what Decipher scores are. Ask anything you want, but don't base even a single part of your medical future on the answer.
Consider if you asked AI what to do and it, using sketchy models simply scraped from other websites (which are themselves often inaccurate), tells you that it's not that serious, so you wait and decide you'll get another PSA test in 6 or 12 months. You may have gone from highly treatable to not so much. You might have gone from surgery or radiation fixing it to needing both and more.
AI is a cool tool, but it's not there yet, not even close. Also, AI has been in use for over twenty years, not just the past five during which it's been a buzzword. I used AI in the military, and I've been out for quite a while. Just because it's the hot thing doesn't mean it's a replacement for experienced doctors (or programmers, for that matter). In time, maybe, but not now and not in the very near future; not so long as opinion is ranked just as high as actual fact to the thirsty AI model that eats everything it sees in its quest to become smart.

This dialog is timely, helpful, and offers valuable caution. Several people have pointed out the evolution of IT that has brought us to this point. My experience began with FORTRAN II programming in 1960, when I was 10 years old. My dad worked in IT at Bell Labs. The most common result of my programs was "syntax error." During the 80s and 90s, I experienced the timesharing and remote computing phase and early B2B e-commerce with GE Information Services. The common observation was that GIGO was changing from Garbage In / Garbage Out to Garbage In / Gospel Out, and most people didn't recognize the risk. Another 25 years later, we're discussing LLMs and chatbots hallucinating and regurgitating statistically plausible but factually wrong responses. A few years ago, my son, who is a professor of computer science at a leading university, noted that AI would be better named "Augmented Intelligence" or, in many cases, Actual (Human) Intelligence. In that context, I have come to appreciate Grammarly for its assistance in a narrow, specialized application, and to be sceptical of the first-pass responses of leading chatbots. That said, I have used Gemini as a "dialog starter" for topics to discuss with my oncologist and radiologist since I was diagnosed with stage 4 PCa about 9 years ago.
The wisdom of Stoic Marcus Aurelius seems applicable here: "If anyone can refute me—show me I'm making a mistake or looking at things from the wrong perspective—I'll gladly change. It's the truth I'm after, and the truth never harmed anyone. What harms us is to persist in self-deceit and ignorance." ― Marcus Aurelius, Meditations, Book VI, 21.
I love AI, not because it's perfect, but because it is a phenomenally useful tool. I find it most beneficial when it educates me to a level that is slightly beyond what I already know.
For this to work well, I think the AI must provide references to its sources. When it does, it allows me to see whether [1] I view the source as credible and [2] I view the answer as logically correct.
If I can get comfortable with [1] and [2], then AI becomes a tool that enriches my life.
@kenk1962 Yes, agreed, but note that genAI isn't actually giving you its sources — that's impossible because of the way large language models work. Instead, it's doing a quick web search for things that happen to have text similar to the text it just generated.
That doesn't mean the "sources" aren't useful (they often are), but it does mean that you need to read those sources very carefully to see if they're really relevant.
Recently I too have started to use ChatGPT a lot. I am a retired market research professional and find it very interesting. I use the free version. I asked it what data sources it uses; the free version does not have access to things that require a paid subscription. I give it all my tests and am amazed at how it organizes my information; that would take me much longer to do myself. I do clean up some of my data to make it easier to analyze. It gives me a lot of background to talk about with my doctors. I find many doctors do not tell the patient much.
But this is a tool that can be very valuable once they work out all the kinks. No one knows everything, and no one remembers everything they have studied, so this tool could greatly contribute to doctor efficiency.
Agreed. You can't discount the importance and value of doing the homework to ensure you have the best confidence in your answers. ChatGPT is just one vehicle to utilize in the "homework," but it should never be the sole or primary source of information used to make decisions as important as quality or longevity of life.
Perplexity.ai indicates the average man with localized prostate cancer, under active surveillance, will spend 20-25 minutes discussing his condition with his urologist at each appointment.
I have had quarterly appointments for the last two years and this answer pretty well reflects my experience.
So let’s say one has ~1.5 hours every year with (as you indicate) “people who know precisely what to do.” BUT “often times you could ask ten doctors what to do and get ten different answers.”
IMO…that’s the inherent situation with the medical establishment (no fault of its own) and where the introduction of AI based self research becomes invaluable.
Imagine if I gave you 1.5 hours to complete the project that would normally take you 12 hours (your example). Suffice it to say, even though I have no doubt you are an expert in your field, your 1.5 hour product would be terribly deficient.
Bottom Line: There is no substitute for time applied to a project, and the value of AI self research into one's medical condition is that the time spent lets one use AI tools to triangulate to one's BEST treatment decision: a decision that is specific to the details of one's condition, fully in tune with one's related medical circumstances, and adapted to one's risk tolerance. This can then be reviewed and revised with one's physician.
The alternative is to find and trust a medical professional who simply doesn’t have the time to fully inform the patient regarding all the details as to why he/she is recommending a particular course of treatment.
@readandlearn
The article about Weizenbaum you linked to is really outstanding, thanks! I would just point out, as the article tells us, that Weizenbaum turned against AI the moment Eliza was completed and impressed many people, not so much with how good it was, though it was better received than he expected, but because it was done so simply, on such primitive machines! Much larger AI projects had already disappointed people. But this article describes how Weizenbaum was perhaps predisposed to negative opinions. Perhaps ironically, that's the very thing that gave him the insights for the project in the first place! Partly on the basis of his negative evaluation, the real achievements of the Eliza project were deprecated for the next fifty years, and still are!
Current LLMs are miles bigger and better than Eliza, but the same still holds: they impress people not for perfection but because they work at all, which is what the old cliché says about talking horses: if a talking horse should make a grammatical mistake, hey, it's still a talking horse!
The greatest feature of today's LLMs is probably their convenience (and their great price: free!). If they are even 51% useful, that's a win. But as the comments here say, they make many mistakes. For a freebie, well, there you go.
Already a great discussion, I'll just toss in another two cents worth. I've been running medical issues past Doctor Google for twenty years. Now old Doctor Google didn't have the ability to take your symptoms and give you a diagnosis or recommendation, it would just toss you a dozen articles full of buzzwords to look at. And I found that very useful. At least it let me talk to the doctor and not be snowed by mere buzzwords.
It does seem that prostate cancer comes with so many different metrics and guidelines that it's very tempting to ask your favorite chatbot to evaluate and advise, but the problem is LLMs are just not structured to do that. Maybe some specific medical chatbots will be developed over the next few years - and it may be that doctors will be the first to use them to spit out summaries to present to the patient! I'm surprised that hasn't already happened. But right now, the general use chatbots we have, really can't be trusted to get it right.
So I use what we have, but just the way I used Google from the start: show me some discussion and buzzwords so I have a little background for any real discussions I have with doctors and other professionals. And perhaps run some of the doctors' statements in for review, but only to see what the chatbot says, not to be disposed to believe it over the doctor. It might raise suspicion in some cases, but then doctors are only human and make plenty of mistakes, or are too hurried to discuss other options.
So that's how I see the proper use of LLMs at this point in time, when it comes to medicine.
By the time I met my surgeon, I had been through so much information about my condition, the surgery, stages, recovery, etc., that when he asked me if I had questions, I told him I thought I was good, but did he know where I could get a haircut near the hospital? When he called my room after the surgery, I said I should apologize for asking him that, but he laughed and said he enjoyed the interlude (phew). ChatGPT and another AI made researching and compiling information and lab summaries quick and understandable.