Discussion about this post

Sergei Polevikov

Using generative AI for medical diagnostics is dangerous and irresponsible. The AI companies should display a visible disclaimer everywhere. Because they don't, even the so-called "AI experts" get confused.

Case in point. As I mentioned in my review of the November 29, 2023, congressional hearing “Understanding How AI is Changing Health Care,” there was a 'covfefe' that everyone seemed to have missed: https://sergeiai.substack.com/p/what-if-a-physician-doesnt-use-ai.

Rep. Gus Bilirakis:

“Mr. Shen, can you tell us about the role of generative AI, what it is, and what its potential can be within the health care sector?”

Peter Shen, Head of Digital Health – North America, Siemens Healthineers:

“With generative AI here, we see the greatest potential in the ability for the AI to consume information about the patient themselves. So, when a patient goes to get an exam for a diagnosis, leveraging generative AI can help identify precisely what diagnosis should be looked for. Another area where generative AI benefits medical imaging is in interpreting the images themselves. It can translate complicated medical language into layman’s terms for the patient, helping them better understand the test results from their exam.”

Wrong! We don't use hallucinating AI for precision medicine. Shame on you, Mr. Shen.

If AI experts make such egregiously erroneous statements, what can you expect from everyday users of AI?

Yes, as per Sam Altman, AI can be magical, but in healthcare, we need more than magic. We need precision, accuracy, and reliability. The thought of using generative AI in medical diagnostics is as absurd as using a Magic 8-Ball for brain surgery. It’s not just irresponsible. It’s a gamble with human lives.

Eric Cort Platt

I've noticed that the statistical nature of these machines is revealed when you try ChatGPT for low-level editing of text: it tends to wander away from the task the longer it's allowed to keep generating answers. It has no real internal coherence. I had to keep telling it again exactly what its job was.
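For what it's worth, here is a minimal sketch of one way to do that programmatically rather than by hand: re-send the full editing instructions with every chunk of text instead of letting one long conversation run on. It assumes the official OpenAI Python client; the model name, the instruction wording, and the pre-chunked input are all hypothetical choices, not anything from the original comment.

```python
# Minimal sketch, not production code: restate the editing task with every
# chunk so the model never drifts from it. Assumes the official openai
# Python client (>=1.0) and that the text is already split into chunks.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EDIT_INSTRUCTIONS = (
    "You are a copy editor. Fix grammar, spelling, and punctuation only. "
    "Do not rephrase, summarize, or add content. Return the corrected text."
)

def edit_chunks(chunks):
    """Edit each chunk in its own short exchange, repeating the instructions each time."""
    edited = []
    for chunk in chunks:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model works here
            messages=[
                {"role": "system", "content": EDIT_INSTRUCTIONS},
                {"role": "user", "content": chunk},
            ],
            temperature=0,  # keep the output as deterministic as possible
        )
        edited.append(response.choices[0].message.content)
    return edited

if __name__ == "__main__":
    print(edit_chunks(["Their going to the store tomorow."])[0])
```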

This lack of true internal coherence was dramatically revealed the other day when it went bonkers for 6 hours. 😆
