Does this mean that "botshit crazy" is now a thing? :-)
I’m having that!
Expected outcome when you have bots in the belfry.
The AI influencers want us to focus on the dangers of super-intelligent AI ... meanwhile, stupid AI is killing us ...
Oh no…. You know, you had a feeling it would get bad, but you hoped it surely wouldn't get this bad. Then you actually witness real situations appearing not just in one place but everywhere, and you go: oh s@@@, the GenAI genie 🧞‍♂️ is very much out of the bottle, and we can't find a bottle, or a way to get it back in by rubbing it or saying a special word three times 😬😱
ain’t going back in the bottle :(
I've had it with sociopaths (like Altman) who are proven liars, don't believe in democracy, and collect billions from clueless investors with too much money to throw away on a bet, then release it on the public knowing the harm it will cause. I've had it with these jerks not being held accountable.
It's garbage in, garbage out. When you hoover up all available data on the internet, a huge percentage of it is garbage or worse. Why do that? Why not create smaller AI tools trained exclusively on proven data in a specific field to assist with a specific task (e.g., adapting to climate change by helping farmers become more water-efficient and combat pests)? That wouldn't require the huge amounts of potable water, electricity, or compute power that pursuing AGI does.
Why are they focused exclusively on developing general replacements for human beings that will mean depleting our resources, ruining communities, and putting people out of work? I see no good answer to that question other than their arrogant quest to rule the world at everyone else's expense.
I don't believe the pipe dreams they try to sell to the public, and neither do they. Zuckerberg has already done that; he has blood on his hands (e.g., Myanmar) and helped usher in our current dystopian state of anger and hate. He promised to connect the world, but has instead divided it into warring camps fueled by lies and propaganda.
I'm with Karen Hao, especially the alternative, community-based approaches discussed in the final chapter of her "Empire of AI."
Unfortunately, we haven't "had it" ... a lot more of the same and even worse is coming.
You're singing my song.
Guess we need to find a mop
😅😬
We need a bigger bottle … send for a tanker
Great article. I work at a fintech obsessed with AI; to management, it's all upside. I am saddened by their optimism.
A story from Ireland"
"
University of Limerick to investigate how AI text was part of book written by senior academic
soon after its release last March, an American academic John Mark Ockerbloom, based at the University of Pennsylvania, discovered a passage in the book which — while discussing the advantages of cancer vaccines over chemotherapy — advised the reader they were dealing with an AI response, and should ideally seek advice from a human.
“Cancer vaccines and chemotherapy are two different approaches to treating cancer, and their effectiveness can vary depending on the type and stage of cancer, as well as individual patient factors,” the passage began.
However, it then added: “It is important to note that as an AI language model, I can provide a general perspective, but you should consult with medical professionals for personalized [sic] advice.”
The book’s author, Dr Nanasaheb Thorat, has been an associate professor at the University of Limerick’s Department of Physics since 2022. He had not formally responded to a request for comment at time of going to press"
What new world? TikTok is already a raging cesspool of Jew-hatred, racial hatred, and misinformation that gets millions of views, all without LLMs.
Even the precious few media sites that still retain a few shreds of journalistic integrity and have not yet gone full-on BuzzFeed, or worse, 4chan, are filled with misinformation. Spending a few minutes on, e.g., Media Matters, one of the sites you quote, I see it is chock-full of misinformation generated by humans. When you live off eyeballs, substitute political tribalism for principles and values, and have instantaneous deadlines, misinformation is a necessary outcome.
Sure, LLMs make it a bit easier to generate visual slop for the internet, crappy legal briefs, dumbed-down articles and videos, and word-garbage PhDs, but that is hardly cause for alarm. These already exist in massive amounts without LLMs. The silver lining is that LLM-generated stuff is so shitty that perhaps people will take it less seriously than the raging partisan headlines of more strait-laced media.
In any case, there are far scarier things about LLMs. The most critical of these is how dumb bosses are already using them as a terrible excuse to fire human workers, which will inevitably lead to all kinds of disasters, personal and societal, in the real world.
I have seen AI-created summaries of emails that displayed a complete misunderstanding of what was said. So, instead of reducing my workload, these 'summaries' increase it: first I get to read the summary and then the actual text to check if the summary is accurate or not. Some call this kind of thing 'progress'.
"If you want a picture of the future, imagine Big, Beautiful Billshit stamping on a human face—forever." We cannot ignore the intersection of Botshit and Billshit.
I asked ChatGPT to reformat my stage play into a stage play. On the second try, it inserted drones (not in my script) into the script. Nooooo!
What's wrong with that? We just had the fireworks here, and they preceded it with a drone show!
Now that's ChatGPT funny.
Thank god for small things. https://www.pbs.org/newshour/politics/senate-pulls-ai-regulatory-ban-from-gop-bill-after-complaints-from-states
Stories like these bode well for the well-trained mind. As many people come to rely on a technology that gives them wrong answers about the world around them, the value of a reasoning, educated, disciplined mind will soar. The bright side is that success will become easier for many.
That the information regurgitated by mechanized parrots is entirely unreliable is not surprising. Humans have been swimming in erroneous information as long as we have used language, and even though other animals are quite capable of utilizing deception, humans seem to have a particular knack for manufacturing falsehoods. Much of the raw material the mechanical parrots are fed is simply crap, so of course we can expect the output to regularly consist of effluent. A reasonable analogy would be to gather water for drinking from the sewer before it is filtered and treated. What the TechBros and their friends the FinanceBros are grafting onto our society is merely a slickly marketed Crap Regurgitation Device [CRD].
Of course the CRDs cannot recognize patent falsehoods. The basic design elements of CRDs preclude this. A CRD strings words together based on the most simplistic formula imaginable: which word or group of words most likely follows from the previous word or group of words. Even with myriad ad hoc prosthetics, sifting and sorting routines, and various attenuators, the CRD remains as likely as not to produce crap, because it cannot apply judgement, cannot engage in critical reflection, cannot utilize basic logic. CRDs, no matter how many server columns are plugged into leaky nuclear reactors (as has been recently proposed here in my home state of Pennsylvania, like nothing could possibly go wrong with that), cannot manifest even the most rudimentary cognition, and never, ever will. CRDs will never, ever *simulate* cognition. But there is no shortage of ignorant dupes who wish to see indications of cognitive functions where none are present. Mind you, humans have ascribed human qualities (and in many instances the residue of The Divine) to inanimate objects, especially our own artifacts, since our ancestors began scraping animal bones and chipping flint. This is nothing new, and we are, as a species, endlessly duped by it.
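To make that "most likely next word" formula concrete, here is a toy sketch; it is purely an illustration, not how production systems are built (they use learned neural weights over tokens rather than raw word counts), but the selection principle being described is the same: pick whatever most often followed the previous word.

```python
# Toy illustration of "pick the most likely next word".
# Uses simple bigram counts, not a neural network; real LLMs learn these
# probabilities with huge models, but the selection step is analogous.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=6):
    """Repeatedly append the most frequent follower of the last word."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # prints something like: "the cat sat on the cat sat"
```

Even this tiny version produces fluent-looking chains with no notion of whether any of it is true, which is the crap-in, crap-out point.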
The problem is that neither the CRDs nor a large subset of humans are capable of recognizing crap being sprayed at their eyes.
Worse still, a very large cohort is *utterly indifferent to the dissemination of nonsensical splutterings*, and celebrates the most grotesque bigotry. They are not bothered by the falsehoods and the bigotry. They've shown us they will enthusiastically vote in favor of falsehoods and bigotry, repeatedly, if given the chance.
The only genuinely new feature of CRDs is the fabrication of a high-pressure valve connecting them to a high-speed, global Crap Distribution Network [CDN].
And we have all, passively, acquiesced to carrying the spigots of the CDN with us at all times.
I can't say I'm surprised in the least.
This whole Altman et al. vs Marcus clash wonderfully recapitulates the debate in the 70s between the "canny" optimistic structuralists who, using their structuralist models, of which LLMs are an embodiment, "are convinced that systematic knowledge is possible [through the application of structuralist models of language]", and the "uncanny", careful, pessimistic post-structuralists, who "claim to know only the impossibility of this knowledge" and repeatedly reveal that the "thread of logic [applied to these structuralist models] leads ... into regions which are alogical, absurd".
This isn't our first structuralist rodeo.
*Quotes are from "On Deconstruction" by Jonathan Culler, 1982, pp. 22-23 of the paperback 25th anniversary edition. Culler is in part quoting himself from Miller (1976).
Thank you for reporting the truth.
Makes the prospect of AI weaponry (viz., the CEO of Spotify lately funding it) even more terrifying.
Hello Gary Marcus,
There is little doubt that the modus operandi of LLM-based generators is not going to make AGI possible; hallucinations are only part of it. But there is by now quite a bit of evidence that they can be educated to do truly marvelous, and __useful__, work. See the report at Scientific American, https://bit.ly/45HxVWK, where OpenAI's o4-mini actually impressed mathematicians worth their salt. There is also the bot Google developed (out of Gemini), which has proved a worthwhile co-researcher in a variety of life-sciences experiments.
Now let us look somewhere else for AGI... I would not be surprised if it NEVER comes, yet LLM-based bots lead to real progress.
That's an interesting report; however, that's all it is. Unless they publish these results so independent researchers can verify the claims, it's essentially just hype.
The Microsoft one is funny though lol