Nonattribution, nonreplicability, and emphasis on what is marketed, especially by larger, entrenched groups, will continue to pollute recognition for the original creators, innovators, inventors, and risk takers.
The first mouse gets the guillotine; the second mouse gets the cheese.
Demis Hassabis is really a quiet and responsible leader in our AI world; he can adapt his approach to the situation. Unlike Sam Altman, who is just hungry for power and money without any contribution to the field.
"the first Nobel Prize for Neurosymbolic AI" — I hope it won't be the last. We need more great minds working on solving actual problems society faces instead of spewing out plastic art and poetry
Thanks Gary, in full agreement. That also tells us another couple of things:
- The Nobel committee has stretched the definitions of traditional disciplines to accommodate new fields. Algorithms, while significant, do not fall neatly into the Physics or Chemistry categories.
I would prefer instead to see new categories emerge, such as "Applied Mathematics," despite the existence of the Fields Medal. 😎 And yes, my reaction too was "What does this have to do with Physics??"
- AI has now achieved Nobel recognition in both disciplines, mirroring an accomplishment matched only by Marie Curie. 😉 😋 (Yes, this is tongue-in-cheek; we still have humans in the driver's seat.)
Gary: To an outsider reading your post, it seems like Hassabis is the main researcher here, with Jumper as a sidekick. But it's actually the other way around. Jumper and Baker are the trained chemists in this group, while Hassabis is not. John Jumper, in particular, has inspired countless grad students at UChicago and beyond, including Hassabis himself. Jumper developed an early version of AlphaFold for protein folding during his PhD, using a hybrid physics-ML model trained with contrastive divergence. (So, by the increasingly relaxed criteria of the Nobel committee, Jumper could've easily snagged a Nobel Prize in Physics. Maybe next year? 😊)
Is there a good writeup about Jumper? Would love a link.
There are a couple of general links about John Jumper:
https://www.gairdner.org/winner/john-jumper
https://news.uchicago.edu/story/uchicago-alum-john-jumper-shares-nobel-prize-model-predicting-protein-structures
For insights into Jumper's college days, outside of speaking to him directly, the best source would be his Ph.D. advisor, Karl Freed.
In addition, Charles H. Martin, Ph.D., shared this on LinkedIn:
John Jumper's work in protein folding is what inspired me to start the WeightWatcher project back in 2015. I was discussing his work with our mutual Ph.D. advisor, Karl Freed, and it got me thinking... this could be applied to AI. Here’s a blog post from way back when that started it all:
https://calculatedcontent.com/2015/03/25/why-does-deep-learning-work/
Also, John Jumper’s PhD thesis:
https://knowledge.uchicago.edu/record/229?ln=en&v=pdf
In June, Quanta published a piece about the impact of AI on protein science. While not about Jumper specifically (also touches on Baker's work), his experience does get the most coverage: https://www.quantamagazine.org/how-ai-revolutionized-protein-science-but-didnt-end-it-20240626/
Yeah, great, so Hinton did quite a bit for CS, but what major contribution did he make TO PHYSICS, specifically? Let's turn every one of these awards into some kind of influencer award instead of recognizing advancements _to_ a particular field.
OK, OK, hear me out:
The Turing Award should start to be given to whoever manages to get Something Really Big Done FROM a computational device!
My impression is that the physicists on the Nobel committee didn't want physics to be left out of the revolutionary discovery of artificial neural networks. They used the pretext that the Hopfield network and Hinton's Boltzmann machine are based on concepts from statistical physics. However, they couldn't award a Nobel Prize on neural networks to the physicist Hopfield alone, hence Hinton.
Hinton should have gotten a Nobel in Literature :)
I think ChatGPT should have got the physics prize for “valiant effort” on the farmer boat/goat/cabbage/wolf river crossing problem.
Yes! AlphaFold does appear (at least to this non-expert) to actually advance Chemistry. But I have seen nothing that I would call evidence that Hinton's work on the Boltzmann machine has contributed anything to understanding of the physical models that inspired it.
Interesting distinction you make between the deservingness of the Hinton vs. DeepMind prizes - I wonder if that's because you know a little less about the antecedents to AlphaFold in industry ... and the work they critically relied on? (Full disclosure - I am biased.) This is not to take away from their amazing work in terms of engineering at that scale and accuracy compared to crystal structures. But the ability to predict shapes of proteins that are not sequence-similar to solved crystals depended critically on insights from earlier work that they reference poorly. Maybe all discoveries are like this ... though for Victor Ambros and Gary Ruvkun (Monday's Nobel) I think that is much less the case.
I'm resonating with @Steven Ray Scott's comment (below) that the Physics Nobel was awarded for computer science/neural modeling work and discoveries -- completely non-adjacent fields, requiring different skills, research focus, and methods.
In his Turing lecture, Hinton dismissed DeepMind (specifically its use of reinforcement learning) as a reductio ad absurdum. And he left Google a few months after Hassabis got management responsibility for all of Google AI.
It's occurred to me that a lot of work with LLMs has a symbolic "feel." A simple prompt is just that, a simple prompt. But once you start creating scripts of prompts, you're basically using symbolic means to direct the behavior of the LLM. And the hidden prompts of OpenAI's o1 technology are even more elaborate. It's as though they want to use a production system to control inference in the LLM, as though they're trying to smuggle symbolic computing in by the back door.
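For what it's worth, here is a minimal sketch of what "scripting prompts" could look like, where ordinary symbolic rules decide which prompt the model sees next. The `call_llm` helper is a hypothetical placeholder, not any particular vendor's API, and the loop is just an illustration of the idea, not how o1 actually works.

```python
# Minimal sketch: a small production-style loop in which plain symbolic
# rules (ordinary Python code) pick the next prompt for the LLM.

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder; wire this to an actual LLM API of your choice."""
    raise NotImplementedError

def answer_with_script(question: str, max_revisions: int = 3) -> str:
    draft = call_llm(f"Answer step by step:\n{question}")
    for _ in range(max_revisions):
        # Symbolic control: code inspects the output and decides which
        # prompt comes next, instead of leaving that choice to the model.
        critique = call_llm(f"List any factual or logical errors in:\n{draft}")
        if "no errors" in critique.lower():
            break
        draft = call_llm(
            "Revise the answer to fix these issues.\n"
            f"Issues:\n{critique}\n\nAnswer:\n{draft}"
        )
    return draft
```

The point is that the control flow lives in ordinary code rather than in the model, which is the symbolic direction of behavior described above.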
Good to see a balanced article that finds developments to praise.
IMO, there is too much focus on the downsides of LLMs, OAI, and Altman - worth the occasional note to keep the pin poised by the hype bubble, but not the constant focus that it seems to attract. I suspect those who read this column are already very familiar with and often in agreement with the arguments, and require little further persuasion on the shortfalls.
Would really like to see you surfacing more developments like those that Hassabis has pioneered, particularly those that may offer alternatives to the myopic optimism focused on LLMs.
It does not help balance in the field or highlight the fringe contributors when even the critics focus predominantly on Google/OAI/X, etc.
Hinton gets a Nobel Prize and Sam Altman gets a Nobel Diss:
“I'm particularly proud of the fact that one of my students fired Sam Altman” — Geoffrey Hinton (at a press conference immediately after his Nobel Prize announcement)