Trust and accountability are trends we could all get behind in 2026.
Hah!
A girl can dream! 💭
You amaze me, Gary. I agree with your conclusions about how LLMs are poisoning the web with disinformation, and that many LLMs won’t be able to distinguish human-generated information from AI-generated content. LLM digital exhaust will only get worse until we collectively reach disinformation saturation. I also agree that LLM-generated breaking news could cause wars to erupt as LLM errors sow news chaos.
Somehow I missed both of these. Thanks for sharing, Gary.
I could see Trump and MAGA thinking that they have misinformation covered and don't need help from any stinking AI. They're not wrong.
I just deleted my Pinterest account because I got fed up with the fakes: stoves floating in the air, faucets draining into nothing…
If only the world had just 100 folks like you.
Fix the misbeLIEf, predictably irrational and GenAI con
Has trump ever said the word "ye" except perhaps in relation to his Uncle Tomye?
I took that as a deliberate reminder of how unlikely it would be 🤷♀️
It is quite likely the tech is vastly overhyped, at least in the short term. Longer term, I could not agree less.
Here's my counter-prediction about "Generative AI, once Silicon Valley’s golden child, will start to look like a fad":
Neural nets, of one flavor or another, will be used not only in self-driving cars, chatbots, and image generation, but also for creating high-quality, physics-aware 3D movies and for controlling robots.
The architecture will evolve, more ideas will be added, and the systems will become a lot more powerful and more reliable. This will take more than one year, though; likely three or five.
It may take a crash to get there, especially if OpenAI keeps on being so reckless.
To add, the current video-generation architecture is not good. Something along the lines of what Fei-Fei Li is trying will likely work out.
Huh, 3D image generation is the killer app for gen AI? I did not know that. It actually appears to me that 3D scene understanding is not really LLMs' or diffusion models' forte. Is there any new architecture supporting those use cases under development? In any case, even if all of that became true in 3-5 years, it is hardly going to be the 10x (or more) economy booster that the current level of investment anticipates.
No, 3D image generation is not the killer app. It is one of the things near-term AI will be able to do, but it will need a better architecture than diffusion models. For 2D images, generation must start with honest geometric relationships between objects, like a skeleton, and then paint on top. For 3D, it likely needs to start with 3D meshes and then refine the texture and fine-level details. A pass of geometric verification is likely needed in either case. As before, Fei-Fei Li and likely others are working on this.
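To make that concrete, here is a minimal sketch of the geometry-first idea, purely as an illustration; the stage functions (build_geometry, paint, verify) are hypothetical placeholders, not any existing library's API or anyone's actual method.

```python
# Hypothetical sketch of a "geometry first, then paint, then verify" generator.
# The stage callables are stand-ins: a skeleton/layout builder for 2D (or a
# mesh builder for 3D), a texture/detail painter, and a geometric checker.

def geometry_first_generate(prompt, build_geometry, paint, verify, max_retries=2):
    """Commit to honest geometry first, paint details on top, then verify."""
    geometry = build_geometry(prompt)        # skeleton/layout (2D) or meshes (3D)
    output = None
    for _ in range(max_retries + 1):
        output = paint(geometry, prompt)     # textures and fine-level details
        if verify(output, geometry):         # geometric verification pass
            return output
    return output                            # give up after a few retries

# Toy usage with trivial stand-in stages (2D case):
if __name__ == "__main__":
    result = geometry_first_generate(
        "a cat on a chair",
        build_geometry=lambda p: {"objects": ["cat", "chair"], "layout": "cat-on-chair"},
        paint=lambda g, p: {"geometry": g, "pixels": "..."},
        verify=lambda out, g: out["geometry"] == g,
    )
    print(result)
```

The point of the structure is just that the painter can never contradict the layout it was handed, and a separate verification step catches the cases where it tries.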
The current investment is surely way too much, as the industry will mature more slowly and the profits will take longer to arrive than some companies anticipate.
The killer application is work automation. Helping people do stuff faster and more efficiently. Even saving workers a bit of time is worth paying a monthly subscription for.
Another near-term application is robotics. Common sense in robots can also benefit from a vast amount of data distilled into behaviors. There is a lot of variability out there, but the overall number of patterns a robot needs to be good at is not hugely larger than what we use with chatbots. Some failure and hallucination is likely acceptable, unless of course people get hurt.
Did you just mention LLMs and common sense in the same text? How in whoever's name do those fit together?
Also, saying that "some failure and hallucination is likely acceptable" will not fly ... it starts with 'how much is _some_ failure?' and 'what is _likely acceptable_?', and that's not even asking about the use cases in which you envision '_some failure and hallucinations_' being acceptable.
"Common sense" is an idealization. Even we people don't possess a good one often.
We need machines that will often do useful stuff and sometimes screw it up. We will have to improve on the former, and minimize the effect of the latter.
It is fine for a robot to misplace a sock. That can be fixed. A dead owner would be bad. That's your example.
>"Common sense" is an idealization. Even we people don't possess a good one often.
Compared to "none," which is what LLMs have and which is very unlikely to ever change, even the "bad" or "so-so" common sense of people is much preferable.
This is fair enough. The question is how to instill said common sense into machines.
Approaches based on lots of rules and if-else statements failed. Now we are doing it by imitation and by making the AI verify what it does, when it can. This works a lot better for AI chatbots than anything we had before, and schemes similar in spirit will likely work for robots too.
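As a rough illustration of "imitation plus verification" (not any specific product or robotics API; the policy, checker, and fallback below are hypothetical placeholders), the control flow might look something like this:

```python
# Hypothetical sketch of "learn behaviors by imitation, verify before acting".
# policy: behavior distilled from demonstrations; verify: an independent
# feasibility/safety check; fallback: a safe default action.

def act(observation, policy, verify, fallback, max_attempts=3):
    """Propose an action from an imitation-learned policy, but only execute it
    if an independent check passes; otherwise fall back to something safe."""
    for _ in range(max_attempts):
        action = policy(observation)
        if verify(observation, action):
            return action
    return fallback(observation)

# Toy usage with trivial stand-ins:
if __name__ == "__main__":
    chosen = act(
        observation={"sock_visible": True},
        policy=lambda obs: "pick_up_sock" if obs["sock_visible"] else "search",
        verify=lambda obs, a: a in {"pick_up_sock", "search", "stop"},
        fallback=lambda obs: "stop",
    )
    print(chosen)  # -> "pick_up_sock"
```

A misplaced sock then costs a retry or a safe stop; the verification layer is what keeps the failure modes in the "fixable" category.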
"The ideal subject of totalitarian rule is not the convinced Nazi or the dedicated communist, but people for whom the distinction between fact and fiction, true and false, no longer exists."
Hannah Arendt, 1951
No one believing in anything any longer is a warmonger’s ultimate goal.
Because no one will believe the sane voices either.
A good example, since nobody in their right mind trusts anything they read in The Atlantic - whether it is generated by AI or a quasi-human.
“and when conflicts are started and escalated by false pretexts.” We didn’t, don’t and will not need AI for that.
No, it will be the business of war, as usual.
Whatever happened to Project Stargate? You don't hear anything about it anymore. Still going, or stalled?
It's still very much under construction. Quite a few photos of the construction site have been posted on various websites.
Hard to see how I could lose any more trust in what I read or see. Sadly, we are at a point where at least 50% of online discourse related to politics, economics, war, medicine, etc. is just straight-up astroturf and bullshit, but it’s not always clear which 50%. So I take all news provisionally now unless I can independently confirm it through multiple sources or personal knowledge, and even then I remain skeptical, looking for holes in the story that would indicate astroturf and bullshit.
They are running the movie SULLY here and there this week, close to January 15. It reveals very clearly the great difference between two elements of modeling: one drawn from the methods of the natural and physical sciences and fields, and the other from the methods needed for success in the human sciences and fields. (It also accounts for the "immorality" of AI.) (This is a repost, edited, from late in the last Gary blogpost.)
Sully is the airline pilot who, in 2009, landed his plane with 155 people on board in the Hudson River after suffering a bird strike and losing both engines. The movie illustrates the point:
Specialization (and the differentiation/isolation of formal fields) is how human beings have treated creative movements in complexity. That leaves the weaknesses and/or strengths of analyses resting on (1) silo-thinking, and an analyst's GENERAL educational and cultural background developed before becoming a specialist, and/or during; (2) the capacity of institutions to formalize regular cross-field fertilization/correlation, at least to keep people aware that their own silo is not the entire world; and (3) the regard for, and presence (or not) of, a sound philosophical basis that fosters threads of unity across historical movements, including political and ethical norms.
Also, where (3) is concerned, the "model" for anything to do with human beings is wrongly centered on the expectation of predictability and extra-site control, as for the natural and physical sciences, and needs to be centered on a reflectively defined normativity, still generalized, but based on rationality/reasonability and reflective thinking, and on an agency of consciousness that can pull things together rightly to meet the vagaries of making choices in moment-to-moment history with an openness and exactitude that, at its apex, is anathema to exact prediction.
That's what Sully the airline pilot did. Morality turns out to be a most practical concern.
In other words, it is about what needs to be done **in this moment and place**, as distinctly different from all other combinations of moments and places in history, and from predetermined ideas. In the movie Sully, the issue was TIMING, aka being skilled but also tuned in to the constant movement of surrounding historical events AS THEY OCCUR and as they bear on everyone.
If so, and if you understand what I am saying above, then how important is it to understand AI in the light of these higher-level human realities? Or does it, or can it, transcend them in some way? Or is the problem one of a deep flaw (and set of absences) at the basis of our thinking about the elements that impinge on our idea of models?
Be more familiar with problems than with solutions. Tools and solutions lock you in, shaping how you see the world and which parts of the problem you see. Know the problem first before seeking a solution.