A Word About ChatGPT / Grok / Etc

Posted by survivor5280 @survivor5280, 1 day ago

I use AI quite a bit, and for a wide variety of reasons. In nearly every instance, the more complex the question, the more I have to correct the model. I don't use it to tell me what to do or how to do it; I can already do the things I ask of it, but it can sometimes offer me a shorter path to get there.

That being said, let me give you a recent example. I've been writing code for many years, but the types of integrations I work with change quite frequently, and I might not touch some system for years and have to go brush up on the changes so I can write something to integrate with it.

That was my recent case. I first asked AI to tell me what had changed. Fair enough. Once that was done, I asked it to write a quick bit of code to do something - code that would save me 12 hours of work if it was correct. Because I do know what I'm doing, I spent the first 30 minutes constantly refining the very incorrect, non-working answer, then another 30 or so making it actually useful. In the end, I still saved 11 hours and learned about the changes that needed to be accommodated. Then I spent another hour hand-editing to get it up to my high standards.

The net result is that 12 hours of work took me 2. It did not spit out the right answer the first time, nor the tenth. Because I fully understood what it spat out and am fluent in what I asked it to do, I was able to ask the right questions, give it the right data and get a nearly right result.

Now enter medical advice or, specifically, prostate cancer advice. When you are first diagnosed you have very little information to go on, which means you are far from fluent. Now imagine you ask AI for medical advice and your experience is like mine: mostly wrong answers that only someone who already knows the topic well could correct into the right one.

Most of us know there is a host of information to gather before a treatment path can be ascertained. We know because we've been there and learned it - often from sites like this. Even once we have every scrap of information, there is no hard-and-fast treatment option, and oftentimes you could ask ten doctors what to do and get ten different answers. And these are all people who know precisely what they're doing.

Ask AI what PI-RADS is. Ask what PSA is. Ask what Gleason means. Ask what Decipher scores are. Ask anything you want, but don't base even a single part of your medical future upon the answer.

Consider what happens if you ask AI what to do and it, using sketchy models trained on data scraped from other websites (which are themselves often inaccurate), tells you it's not that serious, so you wait and decide to get another PSA test in 6 or 12 months. You may have gone from highly treatable to not so much. You might have gone from surgery or radiation fixing it to needing both and more.

AI is a cool tool, but it's not there yet, not even close. And AI has been in use for over twenty years, not just the past five in which it's become a buzzword. I used AI in the military, and I've been out for quite a while. Just because it's the hot thing doesn't mean it's a replacement for experienced doctors (or programmers, for that matter). In time, maybe, but not now and not in the very near future - not so long as opinion is ranked just as high as actual fact by the thirsty AI model that eats everything it sees in its quest to become smart.


Note that the Mayo Community has a policy on posting AI-generated info here (which you didn't do): https://connect.mayoclinic.org/blog/about-connect/tab/community-guidelines/ -- guideline #3.

REPLY

Yes, agreed. GenAI can be very useful if you have some IT knowledge and know how to interrogate it (and force it to verify or refine its answers). The danger for many people is that because it mimics human speech, it can *feel* like talking to an intelligent being, even though it's just a hyperdimensional word-association model that's incapable of even the most basic symbolic reasoning (words have no meaning to it).

If genAI were a human, however, it would be this guy, with an answer for everything (whether it actually knew or not). 🙂

REPLY

Gotcha! I studied neural networks in graduate school in the '90s, and it was kind of fun/hilarious to train networks via backpropagation on wonky input data and see the results. It also illustrated that, when trained correctly, they produced great results.

Fast forward to today, and recall that in 2022, I think, Dr. Kwon indicated that among the 44,000 new cases of PCa there were over 2,000 different treatment paths. One must be extremely careful using artificial intelligence, but I certainly concur it is a nice place to start if you already know what you are about to do next!
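The wonky-input experiment described above can be sketched in a few lines of Python. This is a hedged toy, not a real multi-layer backpropagation network: a single sigmoid neuron fit by gradient descent, with made-up AND-gate data and one deliberately bad label.

```python
import math

def train(data, epochs=5000, lr=0.5):
    # A single sigmoid neuron fit by gradient descent -- the simplest
    # relative of backpropagation, enough to show garbage in, garbage out.
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in data:
            y = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
            d = y - t  # gradient of the cross-entropy loss at the pre-activation
            w1 -= lr * d * x1
            w2 -= lr * d * x2
            b -= lr * d
    return lambda x1, x2: 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

AND   = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # clean labels
WONKY = [((0, 0), 1), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # one bad label

clean = train(AND)
noisy = train(WONKY)
clean_acc = sum((clean(*x) > 0.5) == bool(t) for x, t in AND)  # 4 of 4
```

Trained on the clean labels, the neuron recovers AND exactly; trained on the wonky set, the contradictory label drags it toward hedged, low-confidence outputs - the same "fun/hilarious" degradation, in miniature.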

REPLY
@edmond1971

Gotcha! I studied neural networks in graduate school in the '90s, and it was kind of fun/hilarious to train networks via backpropagation on wonky input data and see the results. It also illustrated that, when trained correctly, they produced great results.

Fast forward to today, and recall that in 2022, I think, Dr. Kwon indicated that among the 44,000 new cases of PCa there were over 2,000 different treatment paths. One must be extremely careful using artificial intelligence, but I certainly concur it is a nice place to start if you already know what you are about to do next!

Jump to this post

@edmond1971 Yes, agreed. Neural networks are interesting stuff, and to their credit, they do attempt to reproduce human reasoning (however imperfectly).

LLMs like ChatGPT are something else altogether: instead of trying to recreate thought, proponents realised we'd reached a tipping point where it is technologically and economically feasible simply to apply massive computing power to a game of "guess the next word." Rather than building a better fly swatter like traditional AI research was trying to do, just get some dynamite and blow up the room where the fly is buzzing around — problem solved! 😕
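For anyone curious, the "guess the next word" game can be illustrated with a toy bigram model - nothing like a real LLM, which conditions on vastly more context with billions of parameters, but the same basic objective. The corpus and words here are invented for the example.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it in the corpus.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def guess_next(word):
    # Predict the most frequent follower; None if the word has no follower.
    followers = nxt[word]
    return followers.most_common(1)[0][0] if followers else None

print(guess_next("the"))  # -> cat ("cat" follows "the" twice; "mat", "fish" once)
```

A real model smooths these counts with a neural network rather than a lookup table, but the objective is the same: rank likely continuations, with no notion of whether they are true.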

REPLY

As others have said, AI is a great tool, but the quality of the datasets used and how the algorithms manage those datasets affect the output. I use it for a variety of purposes, including medical, on a limited basis. On the medical side, I use it to generate answers that I assume are questionable, so that I can ask a doctor with expertise about the direction those answers point. I never assume it is correct.

REPLY
@northoftheborder

Yes, agreed. GenAI can be very useful if you have some IT knowledge and know how to interrogate it (and force it to verify or refine its answers). The danger for many people is that because it mimics human speech, it can *feel* like talking to an intelligent being, even though it's just a hyperdimensional word-association model that's incapable of even the most basic symbolic reasoning (words have no meaning to it).

If genAI were a human, however, it would be this guy, with an answer for everything (whether it actually knew or not). 🙂

Jump to this post

@northoftheborder -- perfect! Now we just need an AI named Cliff.

REPLY

I had just started my career in software engineering when the first "AI" software was written in 1966: ELIZA, 420 lines of code & a modest English database. Its author, Joseph Weizenbaum, later turned against AI: https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai

In tests, it beat modern AI products until very recently.

Consider the number of mistakes that modern AI makes on subjects I know something about, not to mention the obvious misinterpretations of many scanned articles. AI has a VERY long way to go to deserve the trust that it tries to demand.

My conclusion: The emperor's weavers are trying to sell us a bill of goods, as is apparent from the money-grubbing commercials that they have been running on TV: "We can't let the Chinese beat us! We must be first!"

Modern AI is somewhat useful, but the hype about it is fundamentally dishonest, & is being used to scam unknowledgeable investors. Even worse, it is deceiving the public into trusting it. Finally, its ham-fisted integration into other software products, which you often cannot turn off, is annoying; Google's Gemini & Microsoft's Copilot are excellent examples of intrusive features that are difficult or impossible to disable.

Most of Google's search engine responses are AI, & many of them link to YouTube videos. As if egotistical YouTube videos are the authority on which to make decisions! I prefer written articles for authoritative information.

REPLY

It's only a matter of time before AI makes humans totally stupid, as they rely on it alone for answers. It's already happening: people are graduating college and high school with very little knowledge because they've been having AI do their work for years. This is going to force employers, teachers and many others to do in-person quizzing to see if someone actually has any clue.

The reliance on and trust in AI is staggering to me, as if it's a magic pill. Sort of like people who take a GLP-1 because they just don't want to bother to change their diet or do any exercise, only to find out that it has a lot of really bad long-term side effects - many of which aren't even studied yet. There is no magic pill.

REPLY
@survivor5280

It's only a matter of time before AI makes humans totally stupid, as they rely on it alone for answers. It's already happening: people are graduating college and high school with very little knowledge because they've been having AI do their work for years. This is going to force employers, teachers and many others to do in-person quizzing to see if someone actually has any clue.

The reliance on and trust in AI is staggering to me, as if it's a magic pill. Sort of like people who take a GLP-1 because they just don't want to bother to change their diet or do any exercise, only to find out that it has a lot of really bad long-term side effects - many of which aren't even studied yet. There is no magic pill.

Jump to this post

@survivor5280

"Sort of like people who take GLP1 because they just don't want to bother to change their diet or do any exercise only to find out that it has a lot of really bad long term side effects - many of which aren't even studied yet."

I can already see the ads in 10 years: "Were you or a loved one injured by Ozempic or another weight-loss drug? Our team of lawyers is standing by."

REPLY