103 Comments
keithdouglas:

My late father was a pharmaceutical chemist. One of his last projects, at the end of his career almost 25 years ago, was an anti-cancer drug. It seemed promising, etc., and I won't bore everyone with the details. However, his lesson to me (and to all of us) from this experience included this: "cancer is not one disease." He drew the further conclusion that a "cure for cancer" is unlikely ever to be found - cures, or at least treatments, for specific cancers, yes. But how will something with no knowledge of causation, no specifically chemical or physiological knowledge, find anything in the haystack of ideas? Possible, just massively unlikely. I think he would have been appalled by this blind empiricism.

Martin Machacek:

Moreover, even if AI helped to produce plausible ideas faster (absolutely not guaranteed), the process that takes us from an idea to an effective and safe cure (be it a drug or a procedure) won't get any faster. So, really, the most we can hope for at the moment is that LLMs relieve scientists of mundane work (like writing papers and grant applications) so they have more time for actual scientific work.

Miriam Malthus:

Writing papers is still actual scientific work though - when you do empirical research, writing up is part of the process of making sense of your results and how they relate to other people's work. An LLM can make a paper-shaped object but it can't make *your* paper because it isn't making sense of *your* research, it's just assembling words in the way it thinks looks right.

Martin Machacek:

Scientists must, for sure, publish their results. They have to write papers, and LLMs may make that easier by producing readable English text from terse, fact-heavy input from the scientist describing findings and results. This can especially help scientists whose first language is not English.

Jordan Vieler:

There is no “cure” for cancer. It is a highly complex phenomenon in which any number of mutations, coupled with bad luck, cause cells in the body to go rogue. Once the process starts, a complex relationship forms between the patient's metabolic systems and the cancer. Furthermore, the cancer literally begins to evolve under the pressures of therapeutics and the patient's immune system.

Cancer treatment is like pest control for your house. Think of how the rats can learn over generations to evade traps and have evolved tolerance to poisons. Your father helped to create one tool in the arsenal.

Bron:

The problem with the tech companies is that a number of them have gotten access to large databases of two-dimensional images of conditions like cancer, eye-related issues, and on and on. It’s one thing to be successful with two-dimensional problems and a whole different thing with medications targeting the misfolding of proteins in disease. I’m an RN with a PhD in Alzheimer’s (survival analysis of patients on atypical antipsychotics, from UCSF). I specialize in clinical analytics of “complex needs” patients. I agree with other comments here that the tech companies are wildly misrepresenting their AI capabilities with complex disease trajectories. It’s the same ol’ fake-it-until-you-make-it syndrome. I know… I used to work in tech until I got sick of the fake-it-until-you-make-it BS and became an RN.

Marc Atherton:

Re cancer: agree wholeheartedly. Re techno-optimism: AI software is simple; the human body is orders of magnitude more complex. I keep hearing in my field that LLM chatbots are about to usher in a paradigm shift in the effectiveness, availability, and cost of mental health solutions. Dunning-Kruger, anyone? It would be great - not planning on holding my breath. 🤷

JM Lapointe:

As someone who has spent 25 years working in drug development, most of it in cancer drugs, I am extremely confident in saying that we will never "cure cancer". All the boosterism and the 'war on cancer' slogans are just that. The best we can hope for is incremental improvements in survival, each dependent on the specific cancer type, which might possibly turn a limited number of cancers into long-term chronic diseases (as we've done for Chronic Lymphocytic Leukemia, for example), or extend survival in some cancers by a few months or years. AI has some limited uses as a tool for certain specific tasks in drug development, but it will never 'discover cures', and the claims of the tech oligarchs are just laughable for anyone who actually does research in the field.

And as someone who has lost friends to cancer, one thing that would probably increase survival more than any drug is organisational improvement of the health services' cancer diagnosis/therapy/follow-up processes. Budget cuts leading to delays in all these steps, and in the US the profit motive, are detrimental factors that no one wants to measure, but they are probably much more lethal than any lack of drugs.

Robert Bruns:

The pharmaceutical industry has been using AI-like methods since 1964 (run a search for "Free Wilson analysis"). Progress has been slow because the problems are inherently difficult. The most useful models are very specific ones, for instance a model to predict how quickly your proposed compound will be metabolized by cytochrome P450 3A4. Needless to say, these models are based on industry-specific chemical and biological data, not on verbiage that can be scraped off the internet.
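[Editor's illustration: Free-Wilson analysis, as mentioned above, models each analog's activity as a baseline plus additive contributions from its substituents, fitted by linear regression. A minimal sketch below - the compounds, substituent names, and activity values are invented for illustration, not real measurements.]

```python
import numpy as np

# Toy Free-Wilson analysis: activity = baseline + sum of substituent contributions.
# Rows = analogs; columns = 0/1 indicators for each substituent variant.
# Columns: [R1=Cl, R1=OMe, R2=Me, R2=F]  (R1=H and R2=H are the reference).
X = np.array([
    [0, 0, 0, 0],   # parent compound (all reference substituents)
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
])
y = np.array([5.0, 5.8, 5.3, 6.4, 5.6, 5.5, 6.1])  # e.g. pIC50 values (invented)

# Add an intercept column for the parent-compound baseline, then least-squares fit.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

baseline, contributions = coef[0], coef[1:]
for name, c in zip(["R1=Cl", "R1=OMe", "R2=Me", "R2=F"], contributions):
    print(f"{name}: {c:+.2f}")

# Predicted activity of an as-yet-unmeasured combination (R1=Cl with R2=Me):
pred = baseline + contributions[0] + contributions[2]
print(f"predicted pIC50 for R1=Cl, R2=Me: {pred:.2f}")
```

The point of the method is exactly what the comment notes: it only works because the design matrix encodes specific chemical knowledge (which substituent sits at which position), not free-form text.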

Joy in HK fiFP:

I have nothing to back up my suspicion, but it seems to me that curing cancer is probably the best sales pitch possible, and that is why they say it. If it weren't, they would pitch something else - and probably, at some point, they will. Stay tuned.

Thomas Schmid:

Don't forget magic penis enlargement drugs /sarc

Joy in HK fiFP:

Coming next, to a PR opportunity near you!

Rick Greenwald:

Read The Emperor of All Maladies to get a pretty nuanced picture of cancer and its causes. And in the postscript, the author explains how even the good cures are far from universal.

Vishweshwer Mangalapalli:

Thanks for bringing it up. It made for a great and sobering read....

Shaun Bedingfield:

Cancer is complex. Like someone said before, cancer is not one disease. At one end, you have failed cell mitosis; at the other, you have numerous causes for that failure, from age-related degeneration to certain parasites.

We can and have stopped certain cancers. Some cancers are socially hard to kill. For example, we know that the worms in some raw fish can cause cancer. HPV is another known example.

Theoretically, finding cancer is anomaly detection at a massive scale, which is why our bodies, and things like allergies, work much like ML. They also have some real limitations and fail. Our knowledge here is in its infancy. Creating a small model that is as effective as, say, our white blood cells, and survivable, is nearly impossible with current tech. An already-trained model might be more doable, but still not easy, and it would have to be trained on every person; we currently don't build tech like this.

I think our more likely course is just to slowly learn more with established mechanisms. AI in its current form might help sift data but is not a solution in itself.

Aaron Turner:

LLMs are about as close to being able to cure cancer as my cat is - and he died in 2023.

William Bowles:

The magic bullet? The massive increase in cancer is no doubt due to our heavily polluted environment, with an estimated 40,000 novel chemicals, none tested on humans, flooding our air, our soil, our water, our bodies, our food - and that's where the solution lies.

Amy A:

The increase is primarily driven by survival. My father has a cancer that was once fatal; it is now incurable but highly treatable, and he has been appearing in statistics as a person with cancer for 30 years (rather than the 1-2 he would have clocked just a decade before his diagnosis).

He’s the plane that came back from the battle with scars, not the one that went down.

Thomas Schmid:

Yeah, I am not buying that, because it implies that life was healthier in past centuries. The 19th century especially could be used as a counterargument, because a) life was really dangerous then, and b) we have more or less good documentation of the environmental and living situation.

There is a nice book about that age, "What the Victorians Got Wrong", and it states that people at that time in general just accepted that casualties were unavoidable in order to have progress. Care teams? A cup of tea, and "there, there, off you go".

William Bowles:

You're comparing apples with oranges - two different situations. In the 19th century, major threats to life came from infectious diseases due to unsanitary living conditions, poor diets, and poor housing.

Thomas Schmid:

Well at least they are both fruits...

"The massive increase in cancer is no doubt due to our heavily polluted environment" versus "major threats to life came from infectious diseases due to unsanitary living conditions, poor diets, poor housing": and where are you seeing the all-important difference?

And by the way, depending on your age, you should be able to remember the 60s and 70s, with their smog, smoke, and people changing their cars' oil in the neighboring woods or rivers. All of these have been banned and consequently have vanished, or at least been heavily curtailed. So no, the environment and life are *generally* better than before, mostly because responsible people have lobbied and forced governments to do the right thing.

Russ Abbott:

Here's a follow-up on your "sweating the details" observation. It's a conversation with Gemini accessed from the page of an otherwise irrelevant YouTube video. (Hence the introductory sentences.)

------

Me: This is not about the video on this page. It's a completely different question. What weighs more: 20 pounds of bricks or 20 feathers?

Gemini: They both weigh exactly the same—20 pounds!

------

Me: How did you come to that conclusion?

Gemini: That conclusion is based on a simple measurement of mass rather than volume or density. A pound is a unit of weight. Therefore, 20 pounds of any material—whether it is heavy bricks or light feathers—will always weigh exactly the same: 20 pounds.

------

Me: But the question wasn't about 20 pounds of feathers vs 20 pounds of bricks. It was about 20 feathers vs 20 pounds of bricks.

Gemini: Ah, I see! That is a very different question.

20 pounds of bricks is significantly heavier than 20 feathers.

A single brick weighs roughly 5 pounds, so 20 pounds is several bricks. 20 feathers, on the other hand, are incredibly light and would weigh only a tiny fraction of an ounce.

------

Me: Why did you get it wrong at first?

Gemini: I initially misread your question, focusing on the common trick phrase "20 pounds of..." instead of the actual quantity you mentioned. I apologize for the misunderstanding! You are absolutely right that 20 individual feathers are essentially weightless compared to 20 pounds of bricks.

------

Me: But the question was clear: 20 pounds of bricks vs 20 feathers. How could you have made such a serious mistake in reading the question?

Gemini: I apologize for that error. You are absolutely right—I misread the quantity in your question, focusing on a common trick phrase instead of what you actually asked.

20 pounds of bricks is massively heavier than 20 individual feathers. I completely missed that distinction in the first attempt!

------

Me: I don't want an apology. I want to understand how you as an AI could have made such a foolish mistake in reading.

Gemini: That's a fair point, and it's a great example of how AI can sometimes miss the mark when interpreting complex or nuanced language patterns. Here's a breakdown of how that error occurred:

Pattern Recognition Overload: I am trained on massive amounts of data where the phrase "What weighs more: a pound of... or a pound of..." is an incredibly common riddle. My system often prioritizes matching inputs against these highly frequent patterns, which caused me to automatically fill in "pounds" for the second item, assuming the standard riddle structure was being used.

Lack of Real-World Logic: Unlike a human, I do not have a physical understanding of the world. I don't truly understand the mass difference between a brick and a feather; I only understand the mathematical definitions of pounds and units. Therefore, I didn't immediately identify the logical absurdity of the comparison on a physical level, focusing instead on the linguistic structure of the sentence.

Context Misinterpretation: I failed to properly parse the specific quantities and units you used (20 pounds vs 20 feathers) because my attention was skewed by the expected pattern, highlighting the difference between data processing and true comprehension.

Martin Machacek:

The explanation of LLM limitations with respect to comprehension that Gemini produced is, though, really good. The sad part is that despite being able to describe its own limitations, Gemini cannot use this “knowledge” to improve its performance next time. It is still likely to make the same mistake once the conversation drops out of its context. LLMs can produce descriptions of meta-cognition despite actually having none, and that confuses humans. We tend to assume that speech/text is a representation of knowledge.

Thomas Schmid:

"We tend to assume that speech/text is a representation of knowledge": This! Just because it knows the approximate token sequence to schmooze its user, it doesn't *know* squat about anything.

Russ Abbott:

Look at the second half of my post (https://russabbott.substack.com/p/the-human-skills-you-need-to-avoid) on Daniel Pink's recent video. It discusses exactly the issues that @Martin Machacek and @Thomas Schmid discuss.

Saty Chary:

Hi Gary, so true! It's an insult to the medical community when AI is propped up as a magic pill that's better than all their career-long expertise! It's gall and stupidity combined. Same story in many other fields - the environment, for example (and energy, materials, etc.). Somehow "AI" will magically offer up amazing solutions that have evaded humans, omg - except that it hasn't.

Same with the core of AI. OI is the AI that AI should have been, for decades. Also, Cognitive Science has had '4E cognition' since the 90s - AI people now make noises about embodiment as if it's their new discovery!

Arrogance and hubris are not better than humility and collaboration with established fields and practitioners. AI wouldn't be the house of cards it is (where 'winters' are accepted, even expected) if there were a better flow of cross-disciplinary ideas.

Spherical Phil:

Gary, thank you for sharing something so deeply personal. Our own lives have also been profoundly impacted by this disease. The losses you describe are heavy, and your critique of Silicon Valley's 'technosolutionism' is entirely necessary today.

You and Emilia are pointing to the exact boundary where current AI architecture stalls: treating human biology as a closed, mechanistic puzzle rather than recognizing the human as a Complex Adaptive System (CAS).

When you mention that drugs fail because they only work on a 'subset of people,' that is a situational reality. A human being is not an isolated set of interacting molecules. We are dynamic, emergent systems. The diverse factors influencing both the onset of a disease and the trajectory of recovery extend far into an individual's broader context—factors our current models rarely map.

If AI is going to actually support individual humans and improve these outcomes, it requires more than training on the language of chemistry. It requires an architecture capable of Contextual Reasoning. We need a Situational Anchor to map the non-linear, unfolding variables of a specific human life, rather than treating a patient as a static data point.

We are currently trying to solve complex, emergent reality with reductionist tools. Thank you for framing the human stakes of this architectural gap so clearly.

swizzlesticky:

Gary, in addition to new drug development, we really need to reassess what the major factors are that cause cancer to develop and grow. AI isn't designed to do that unless you push it, maybe. If you're interested, check out my blog - 10 years of reading studies, but from a non-traditional perspective: synergiesforcancertreatments.blogspot.com. No gain in it for me; I'm not selling anything, but it's obvious we need fresh perspectives.

Steve Coy:

FWIW, I do know of one AI company that has made significant progress in this area: Pattern Computer (https://www.patterncomputer.com/). Their approach is entirely original, different from any other AI approach I know of, and has nothing whatsoever to do with LLMs.

Full disclosure: I am an investor in the company, but I am an investor because I think they are onto something, not vice versa.

Paul Burchard:

On it. With humility.