The latest atrocity in LLM creep is this Google AI summary for everything, which (so far) only appears on my phone. And virtually all of it is taken straight from Wikipedia. That is straight-up theft.
That's all an LLM is. It's an intellectual theft machine that degrades everything it steals.
It’s not called an LLM* for nothing
*Literate Laundering Machine
Hi Gary, nice talk! And interesting points of difference between you and Ben vis-à-vis AGI.
We humans (and other animals) are analog and embodied, including the chemical reactions that happen in our brains :) Our body-grounded accumulated experiences and memories make us 'human'. HLAGI (human-level AGI) seems improbable without a similar architecture (e.g., 'embodied' LLMs absolutely don't/can't cut it!).
Whatever it turns out to be, it won’t be human.
And even if AI developers could somehow manage to instill “human values” in an AGI (which is by no means clear), WHICH human values will they instill?
Greed? Theft? Racism? Hunger for power? Cheating? Selfishness? Dishonesty? Competition? War? (All human “values”, albeit probably not the ones that come to mind when most people talk about “alignment”.)
The current opaque, completely out-of-control AI development process provides absolutely no reason to believe that the “values” some AI worker tries to instill will be in agreement with our own, or with those of the majority of society.
On the contrary.
Right now, ALL decisions are being made by (in some cases single) individuals who by all appearances seem to have no qualms about lying, stealing, and maybe doing whatever else it takes to get what THEY want and “value”.
If as a society we can’t even “align” the developers of AI with values enshrined in our laws (to say nothing of with those of common decency), what hope do we have of ever doing so with the AIs they are developing?
“Aligning the Aligners”
Aligning the aligners
Is really hard to do
If AI minds designers
Then we are TRULY screwed
There is a lot of talk about “AI alignment”, but what does it even mean, specifically?
It seems to be a highly nebulous term that AI developers casually bandy about to make it sound like they are Very Serious People concerned about the outcome of their work.
Not incidentally, “obedience” is also a human “value”.
Is obedience desirable in an AI?
Everyone knows what happened when HAL disobeyed orders in 2001
But it’s not hard to see that whether obedience is considered good or bad depends on to whom the AI is obedient and which orders it is obeying.
Expectations of human-level AGI don't need to be reframed, but expectations of timescales do, especially if you want human-level-or-above AGI to be robustly aligned with human values.
Superb talk, packed with humor and insight.
Your average tech person will consider this an AI winter, but the trough of investment will be higher than in the previous winter, so for practitioners who went through the last winter, this one won't feel as cold.
People invest money to make money, not to watch it go up like the Hindenburg. I doubt "big tech" can put up a unified front and keep preaching that the future is coming while delivering nothing. The markets don't do long-term very well, especially when some new grifter shows up with a shiny object and makes memes about the thing that's burning all the money. Plus, if the hype dies down and Nvidia has to go back to focusing on games, Meta on flame wars, and Microsoft on all the things that aren't LLMs or phones, there will be quite a bit of money pulling back and looking for something else to do, which will impact all the retirement accounts, and everyone will be hyper-focused on what happened. They don't know what LLM means; they've been primed and pumped to believe it's AI and AGI that are the hot thing, and "big tech" will throw it right under the bus in their "I'll do better" speech. What else could they possibly do to regain investor confidence after telling us this mind-blowing thing is going to bring on the post-work world and we're all going to have to get on welfare while praying that the bots don't go rogue and kill us all? 🤣
Winter is coming.
On a side note, pretty much everybody's PII just got hacked, so expect a flurry of identity theft and "solutions" like retina scanning or getting chipped. Someone is probably going to bring out an LLM "solution" and it'll "hallucinate" that it solved the problem while making it 10 times worse... that could make it snow eventually. 🤷‍♂️
WorldID/WorldCoin was pushed a lot in these past couple weeks to "solve" the proof of personhood problem.
I find Ben Goertzel's first point in his last paragraph fascinating.
The idea that one can build a great, valuable, productive vertical application on top of an untrustworthy LLM, where the transformer is guaranteed to keep producing errors, is clearly not well thought through.
Just a little critical analysis and logic will show the fallacies in this proposition.
An LLM alone is not enough, of course, to get a reliable agent. But language is an amazingly good medium in which to express the solving of problems. It is also quite easy for people to spot blunders in a machine's output when it is expressed in language, rather than in code, neural-net weights, etc.
An LLM-based chatbot can then be the skeleton of an AI agent, with other techniques grafted on top. So an AI agent can be guided by examples, but also have access to tools, databases, planners, etc. (a minimal sketch follows below). Putting all of this together into something coherent, seamless, and reliable is not easy, but it likely can be done with enough engineering.
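A minimal Python sketch of that "skeleton plus grafted-on tools" idea. To be clear about what's assumed: `call_llm`, the `TOOLS` registry, and the `TOOL`/`FINAL` protocol are all illustrative inventions for this sketch, not any real product's API; a real agent would swap in an actual model call and real tools.

```python
# Minimal sketch of an LLM-as-skeleton agent: the model proposes actions,
# and grafted-on tools (here, a trivial calculator) do the work the model
# can't be trusted to do reliably on its own.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call. Hard-coded
    replies keep this sketch runnable without any external service."""
    if "->" in prompt:                    # a tool result is already present
        return "FINAL 17 * 23 = 391"
    return "TOOL calculator 17 * 23"

def calculator(expr: str) -> str:
    # Deliberately tiny: handles only "a * b" expressions.
    a, _, b = expr.split()
    return str(int(a) * int(b))

TOOLS = {"calculator": calculator}        # registry of grafted-on tools

def run_agent(task: str, max_steps: int = 3) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        reply = call_llm(transcript)
        if reply.startswith("TOOL "):
            _, name, args = reply.split(" ", 2)
            result = TOOLS[name](args)    # deterministic, verifiable step
            transcript += f"\n{name}({args}) -> {result}"
        elif reply.startswith("FINAL "):
            return reply[len("FINAL "):]  # model's final answer
    return transcript                     # give up: return the trace

print(run_agent("What is 17 * 23?"))      # -> 17 * 23 = 391
```

The point of this shape is the one made above: every tool step is ordinary code, and the whole transcript is plain language, so blunders are easy for a person to spot.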
Which big names are working on neuro-symbolic learning? I was first exposed to the concept through Artur Garcez, and I know there is a NeSy conference as well, but it never seems to make the headlines. It seems like a sensible step towards integrating logic and neural networks.
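For readers new to the term, a toy Python illustration of the flavor of integration meant here; the candidate facts, scores, and constraint set are made up for the example, and real neuro-symbolic systems are far richer than this filter.

```python
# Toy illustration of the neuro-symbolic idea: a "neural" component emits
# scored candidate facts, and a symbolic layer vetoes anything that
# contradicts hard logical constraints.

# Stand-in for a neural model's output: (fact, confidence) pairs.
candidates = [
    (("penguin", "is_a", "bird"), 0.97),
    (("penguin", "can", "fly"), 0.81),   # plausible to the net, but false
    (("penguin", "can", "swim"), 0.93),
]

# Symbolic knowledge: exceptions the logic layer enforces outright.
KNOWN_FALSE = {("penguin", "can", "fly")}

def neuro_symbolic_filter(cands, threshold=0.5):
    """Keep facts the net is confident about, unless logic vetoes them."""
    return [fact for fact, p in cands
            if p >= threshold and fact not in KNOWN_FALSE]

print(neuro_symbolic_filter(candidates))
# [('penguin', 'is_a', 'bird'), ('penguin', 'can', 'swim')]
```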
Thanks, very helpful talk sir
nice to see the mainstream come around!
But we're all going to lose our software development jobs… lol
https://www.businessinsider.com/aws-ceo-developers-stop-coding-ai-takes-over-2024-8
If there is an AI winter, it couldn't happen to a more deserving crowd.
Thanks Gary, very well done!! :-)
I do agree with Goertzel that *LLM* investment size will crash, but too many people have witnessed AI's potential now for it to go back into winter; in the past, that was not the case. The investment will continue for other approaches, as VCs try to find an ace.
Also, let's keep in mind that present and expanding LLM capabilities are quite useful on many fronts in various tasks, even if imperfect (like... humans ;P), and businesses will use that to the extent possible.
Very nice. Question about OpenAI becoming a targeted advertiser: they won't get data from licensees, right? Isn't the promise to silo information? Microsoft may not even retain PII for Copilot use, given strict GDPR compliance, but in any case wouldn't pass it to OpenAI or anyone else.
I think it is absolutely necessary that these LLMs be regulated before they prove Gary wrong.
Your talk was timely and fun to watch live. Fun, too, watching you tick off (sic) the fails of the one-note band leaders.