Discussion about this post

Raj Iyer:

It certainly seems the negative societal impacts of generative AI are far outpacing any potential benefits, and at staggering speed. The pollution from LLMs threatens the internet as a useful communication medium: they suck up human-generated content and then turn out a firehose of, at best, mediocre, unreliable, generated swill.

If someone *wanted* to harm society and the economy in one efficient stroke, I doubt they could have come up with a better move than having OpenAI release ChatGPT in 2022, with grandiose claims (that continue not to hold up), and set off the rat race that's currently playing out.

Shakespearean tragedy seems too small to describe this. This is like the Iliad, or the Mahabharata: humankind letting loose its worst instincts and causing mass suffering and harm.

Art Keller:

Funny how, if you slog through LeCun's most recent appearance on the Lex Fridman podcast, LeCun is now very skeptical of LLMs as the path to AGI or seriously advanced AI. The most dangerous thing about AI development is that it promotes people who are highly technically proficient (which LeCun clearly is) but also unbelievably intellectually dishonest. They repeatedly hype AI's alleged capabilities while disparaging those concerned about safety and reliability. When the safety concerns become impossible to deny, the AI hype people move on and pretend they knew all along that, for example, LLMs are unreliable. No! You were shouting down the people saying that just a few months ago as "doomers"!

The people with tech skills AND shameless hype get billions in seed capital, while the people warning about safety get belittled and scorned by people like Marc Andreessen, who claims AI will be the silver bullet for literally every problem humanity has. Meanwhile, LLMs can be hacked by people who know nothing about AI, just by prompting a model with a few sentences it can't handle. Or with a 30-year-old computer graphic!
