25 Comments
Jun 4 · edited Jun 4 · Liked by Gary Marcus

The author of the Blood in the Machine book about the Luddites, Brian Merchant, wrote an interesting piece about AI being shitty but still being able to take over jobs (it doesn't have to be good; the bosses just need to believe, correctly or incorrectly, that it is good enough). Like graphics and creative writing being replaced by GenAI. The result is shitty, but it's a lot cheaper. He makes interesting comparisons with the rapid growth phase of the industrial revolution. https://www.bloodinthemachine.com/p/understanding-the-real-threat-generative

In short, there will be productivity, but (apart from agility) we pay for it in quality, getting a 'cheap' product as a result. The outcome of GenAI adoption will be that humanity goes 'cheap'. Sounds like a worthwhile insight.

AI is all about marginalizing the human.

If AI does doom humanity, this is how. Not because it becomes a Terminator-style genius, but because some corporate dope puts a stupid program in charge of something important and it fails.

I teach an introduction to artificial intelligence to Lehigh non-tech students and CPAs in a continuing-education course, and I always start by asking: what is your definition of intelligence? For humans it is definitely not going straight from data to intelligence, which is what all GPT systems do. There is a middle process, from data to information to intelligence, which involves much more analysis, mostly of the meaning, trustworthiness, and accuracy of the data. I truly believe no generative language model can do this without constant human involvement, and that generative language as the basis for AI will be obsolete within a few years as new approaches with neuron processing become available to create information from data. There are a number of examples where this is starting to occur, with start-ups and inside Google, that I would love your thoughts on.

Jun 4 · Liked by Gary Marcus

AI has become a monocrop. Are there any Doug Lenats out there attempting to break new ground? I'd like to find adventurous research beyond the Big AI billionaires. What's the most adventurous work taking place these days?

Jun 4 · Liked by Gary Marcus

Meanwhile we all sigh with relief: "Finally."

Also, I don't think it matters to them that the tech is unreliable; I expect them to give it as much access as possible, just hope for the best, and let the public foot the bill.

Can they just release the series when it's done? Binge watching is the only way I can get really into anything anymore.

Hi Gary. I've been reading your newsletters for a few months now. You're a bona fide AI bigwig. But when you wish "AI these days was more about research than money, politics, and ego," that's also down to you. Currently, I feel like I'm reading a gossip column. You can do so much better.

author

quick, name all the major advances on basic capability in the last two years.

If none, then better to say nothing. But the majority of successful organisations, including their innovation and resulting products, outlast the people in them. The personal is ultimately ephemeral and peripheral to technological progress. Maybe dig into the incremental advances benefiting customers of industries developing embedded AI tools and systems: for example, aviation, medical devices, and the creative industries. Even if these aren't worthy, say why you think so. Might that be less stressful to write (and read) than the current emotionally driven, angst-laden content?

author

Would you tell that to the NYT, which just now broke this, confirming what I had been saying? https://x.com/kevinroose/status/1797992266277285933?s=61 Please unsubscribe if you are unhappy with my covering what I think is important for society.

Newsworthy doesn't always mean future worthy, stock worthy or tech worthy. I really don't want to unsubscribe (and I see I'm not the only one noting your shift in emphasis). When you write well then it's the best around. But veering into monomania isn't attractive. That's my feedback and I feel that's all I need – and want – to say on the matter as just one of your many readers. Thank you for taking the time to acknowledge what I said.

Use your intellect properly, Gary. We all get that Sam is untrustworthy. But in an industry as nuanced and full of genuine news as generative AI, when you keep concentrating all your attention on Sam and Yann, you look increasingly like a gossip magazine editor with personal axes to grind. It's a sad look for you.

author

my view, which I guess you missed, but which I have tried to express, is that (a) OpenAI is potentially dangerous and therefore its credibility is quite relevant to society, and (b) there *is* more gossip than genuinely new research right now, which is part of what I was conveying. but feel free to unsubscribe.

Have certainly heard that view, loud and clear. It’s all you post about these days. As evidenced by two more posts you’ve published since I wrote the above 12 hours ago, about (surprise!) OpenAI and Yann.

For someone who calls himself a scientist, "there is more gossip than research" is a straight-up ridiculous statement to make.

How about, off the top of my head, using a scientific approach to examine topics like… the energy advancements we'll need to keep pace with the hardware requirements of genAI in the next decade, DeepMind's AlphaFold work, and/or the possibilities and concerns of the push for ubiquitous AI agents. To name just three topics that are more interesting than your tired attacks on the same individuals.

Yes, I could unsubscribe. But for the moment it’s more enjoyable holding you to account (something you clearly feel the need to do to others) in your own echo chamber of yes men you’ve created in here.

"The AI Revolution is Already Losing Steam" perhaps in the eyes of their customers, the press, and/or the general public, but boy it's hard to find applicable job listings on LinkedIn that aren't LLM or otherwise AI focused.

Hey Gary, stop caring what Kara thinks. If you'd ever been to a Silicon Valley insiders' party, you'd see her quite comfortable with all of these CEOs she claims to be so hard on: quite the hanger-on, basically, nothing like she portrays herself. Her tough talk is more her shtick than any real "holding these people to account" type of relationship. She displays all the qualities we so harshly judge White House journalists for: too cozy with the people she purports to be reporting on and asking the tough questions of. Though these days, I'd call her less a journalist and more a show host entertaining her audience with her pseudo-tough snide remarks 🤣

Let me try again. Can the current limitations of AI be improved in the future, or are they endemic to the very essence of LLMs? This is a separate issue from OpenAI's corruption and moral limitations.

Can't wait to learn who's going to get voted off the island next week.

And the reality TV drama of the third wave of AI is happening on X.

Perhaps you can occasionally write about good things happening in AI? Not everything is Sam Altman and OpenAI.

author

which? pretty much every new model seems to have the same shortcomings.

Comparisons to Barnum and Buckminster Fuller aside, when I hear Altman, I now think of Milo Minderbinder from the novel "Catch-22".

Given the crypto mess and now this, I know some people might call it unfair, but I think the simplest solution is just to pass a law saying anyone who dropped out of Stanford and whose name is Sam should be banned from senior roles in the tech sector.
