F Cancer
The real test of AI
Just in the last few years alone, cancer has taken my mother, one of my closest friends from childhood, my closest friend in my adopted hometown (she was only 51), and a beloved aunt who inspired and supported me throughout my life1, as well as several colleagues, including the AI pioneer Doug Lenat. And of course my own experience is far from unique; virtually everyone I know has lost loved ones to cancer.
AI is supposed to change all that. But so far it hasn’t.
A new essay by Emilia Javorsky, a physician scientist at the Future of Life Institute, offers a sobering statistic:
I don’t agree with everything in the new essay, but I agree with a lot of it. The part that I agree with most is its strong objection to technosolutionism, to the kind of fantasy that if we just had the right algorithm, cancer would be magically and immediately solved. (Dario Amodei’s ridiculous claims about doubling life span in the next decade are deluded on this front.)
Javorsky writes, for example, that
“Silicon Valley has repeatedly stormed into healthcare with the hubris of outsiders attempting to reinvent a system they do not fully understand, failing to learn from past mistakes. The repeated pattern suggests not mere execution challenges but systemic misunderstanding of what’s actually limiting medical progress.”
As Javorsky argues, failures in drug development sometimes have more to do with market forces than with science. For example, Javorsky notes that “New antibiotics are inherently unprofitable, as they must be used sparingly to avoid promoting resistance, limiting revenue potential.” That’s not a science problem; it’s an incentives problem.
Or a disease may be too rare, or a drug too expensive to produce. Or a looming patent expiration might lead a company to abandon costly studies of a drug that actually works, simply because the economics aren’t promising enough. Javorsky gives the example of Tanespimycin, which was “difficult to produce [with] only limited time remaining before the drug’s patent expires [making] further investment in the drug difficult to justify financially.” Even if we had perfect AI, it would be only part of the battle.
§
Javorsky is also quite right that there is – and always has been – a great deal of naivete:
Javorsky cautions against oversimplifying biology:
As a friend of mine likes to say, “that was never going to work.”
The reality is that biology is really, really hard. Our bodies are made of trillions of interacting molecules, and each body is different. Many drugs that work for some people probably fail clinical trials because they work only on some subset of patients, and we don’t understand biology well enough to know which subset.
§
The one thing I think Javorsky gets somewhat wrong is that she downplays just how important getting the right algorithm is, and how far we are from that point.
At the risk of redundancy, I will repeat a quote from Eli Lilly CEO David Ricks that I shared recently in a different context; it is very much on topic:
[AI is far from curing cancer and most other diseases.] “If you just ask them to solve biology or chemistry questions, they’re not particularly good at it… They’re trained on the human language, not on the language of chemistry, physics, and biology.”
Over and over we discover drugs with AI that work in animal trials but not safely enough in humans; we just don’t understand biology well enough. (This isn’t a problem specific to AI. According to one analysis, 92% of drugs that succeed in animal trials run into toxicity problems in humans.)
Actually beating cancer is going to require systemic changes, both technical and economic, in how we develop and test drugs, as Javorsky convincingly argues. But it will also require deeper forms of artificial intelligence that are better able to reason about chemistry, physics, and biology.2 AI will also need to sweat the details a whole lot better.
The sooner we get there the better.
This essay is written in memory of my mother, who would have been 84 last week; in memory of my father, taken by cancer over a decade ago; and in memory of all the other friends, colleagues, and relatives I have lost, and of all those we all have lost.
As it happens, I honored Aunt Esther in one of the most important passages in my books, featuring her in a hypothetical in The Algebraic Mind (2001) that anticipated the problem of hallucinations.
Of course, AI can have a positive impact on medical research long before it reaches that level of sophistication, simply by streamlining everyday tasks, helping with subject recruitment and paperwork, and so on. Such productivity gains seem plausible in the short term, even if the more dramatic magic some people are hoping for doesn’t materialize. In the near term, the greatest contributions from AI toward combating disease will probably come from enhancing humans with better tools rather than from radical new scientific discovery.





My late father was a pharmaceutical chemist. One of his last projects at the end of his career, almost 25 years ago, was an anti-cancer drug. It seemed promising, etc., and I won’t bore everyone with the details. However, his lesson to me (and to all of us) from this experience included this: “cancer is not one disease.” He drew the further conclusion that a “cure for cancer” is unlikely to be found; cures, or at least treatments, for specific cancers, yes. But how will something with no knowledge of causation, no specifically chemical or physiological knowledge, find anything in the haystack of ideas? Possible, just massively unlikely. I think he would have been appalled by this blind empiricism.
The problem with the tech companies is that a number of them have gotten access to large databases of two-dimensional images of conditions like cancer, eye-related issues, and on and on. It’s one thing to be successful with two-dimensional issues and a whole different thing with medications targeting misfolding of proteins in diseases. I’m an RN with a PhD in Alzheimer’s (survival analysis of patients on atypical antipsychotics, from UCSF). I specialize in clinical analytics of “complex needs” patients. I agree with other comments here that the tech companies are wildly misrepresenting their AI capabilities with complex disease trajectories. It’s the same ol’ fake-it-until-you-make-it syndrome. I know… I used to work in tech until I got sick of the fake-it-until-you-make-it bs and became an RN.