48 Comments
Feb 9 · Liked by Gary Marcus

Maybe ChatGPT did the trillion-dollar math?

What are all these tasks that people think they can automate? At some point, the human still has to have intentions and communicate them to the device. All of this starts to make my head hurt.

And we forget that automation decreases the human's incentive to pay attention and thus catch problems (as now seen in Teslas having the most accidents of any brand, based on insurance claims).

author

first sentence is hilarious!

Feb 9 · Liked by Gary Marcus

Saw the $7T Altman headline and my eyes involuntarily rolled up. Also, no way in hell I'd ever let any AI agent take over any of my devices.

Orwell is twitching in his grave. And I don't mean streaming.


If "spinning in your grave" were a literal thing, Orwell would be putting out enough RPMs and torque to power an aircraft carrier.


He's just getting started...

author

😂


Birgitte, exactly, me neither. Zero need to willingly invite stupidity into my life.

Feb 9 · Liked by Gary Marcus

My prediction: nobody is going to give Altman, or anybody else, $7 trillion anytime soon. That is simply an unfathomable amount of money (the whole of US annual GDP is only about $23 trillion). Nobody has the organizational capacity to single-handedly direct the spending of that much money, even spread out over 10 years or so. How many people do you think you are going to hire? Are there even that many qualified AI engineers in existence? What will your onboarding process look like for all those people? Or maybe you are going to build a titanic computing infrastructure. That much money will buy a staggering number of servers. What are you going to power them with? How long will it take you to buy enough AC units to cool it all?

Here's a fun fact. Remember ARRA (the American Recovery and Reinvestment Act of 2009)? The total size of that program was between $700 billion and $800 billion over 10 years (though front-loaded, obviously). Even though it was intended to be spent on "shovel-ready" projects, the US Government had persistent problems getting the money out the door, especially in the first year. Nobody knows how to spend money like the US Government, and if even they have trouble spending $700 billion of found money, then what chance does Sam Altman have of (productively) spending 10 times as much?
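
For a rough sense of scale, here's a back-of-envelope sketch in Python (every input below, from server cost to power draw, is an assumed round number for illustration, not a sourced figure):

# Back-of-envelope: what might $7 trillion buy in compute?
# All inputs are rough assumptions for illustration only.
TOTAL_BUDGET = 7e12      # dollars (the reported ask)
SERVER_COST = 250_000    # assumed cost of one high-end AI server, dollars
SERVER_POWER_KW = 10     # assumed power draw per server, kilowatts

servers = TOTAL_BUDGET / SERVER_COST
power_gw = servers * SERVER_POWER_KW / 1e6   # kW -> GW

print(f"Servers bought: {servers:,.0f}")      # 28,000,000
print(f"Power required: {power_gw:,.0f} GW")  # 280 GW, a large fraction of
                                              # average total US electric load

# The ARRA comparison above: ~$787B vs $7T
print(f"$7T is {TOTAL_BUDGET / 787e9:.1f}x the entire ARRA stimulus")  # 8.9x

Even under these generous assumptions, the power figure alone implies a buildout on the scale of national infrastructure, which only sharpens the disbursement problem.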

author

all that would have been great to add!


The largest portion of the ARRA was spent on tax relief for small businesses. After that, extensions of unemployment benefits, Medicaid supplementation, and fiscal relief to states to help pay their employees. That accounted for around 60-70% of the total. https://www.city-data.com/forum/economics/552656-stimulus-pie-chart.html

So there's your "spending."

Total was $787 billion, about half of the amount originally proposed; the Democrats basically cut it in half themselves before introducing it, in order not to be attacked as deficit spenders. (Which happened anyway.)

"Shovel-ready projects" was a soundbite. One made all the more ironic by the fact that all the GOP Senators led by Mitch McConnell had to do in 2009-2010 was threaten a filibuster in order to obstruct or delay routine funding measures--including funding for infrastructure projects like water treatment plant maintenance and expansion--at the same time the ARRA was being put forth in Congress. https://www.google.com/books/edition/The_Betrayal/7qleEAAAQBAJ?hl=en&gbpv=1&dq=republican+senators+threaten+filibuster++2010+mcconnell+funding+bills+obama&pg=PA27&printsec=frontcover

It's inherently difficult for the Federal government to allocate funds for new spending projects because Congress has the power of the purse. Between partisan political maneuvering and individual earmarks in bills and amendments, the situation has a way of getting complicated.


Thanks for the additional information. Note that these considerations make the case against Altman's $7 trillion plan even stronger. Tax relief, unemployment benefits, and similar things wouldn't be available options for Altman's AI initiative; he would have to actually find something useful to spend it on.

Feb 9 · Liked by Gary Marcus

Taking over the device is probably the dumbest idea ever, unless the system taking over is both supersmart and super ethical. Neither is anywhere near reality.

The $7 trillion definitely has a bitcoin 'number go up' vibe. It also has a Sam Bankman-Fried vibe, as in he might actually believe that going insanely big will turn out well (I suspect SBF convinced himself of the same, which doesn't change your actual accountability). Or this is part of his grandiose dreams that include limitless free energy through fusion or fission (he is invested in both) and now new chip paradigms (which, I must admit, are necessary to do anything even resembling true intelligence).

This is not thinking outside the box. This is being out of your mind.

author

🤣


“Out of your mind” >> my thoughts exactly. I'm no psychologist, but it does feel like there's a delusional disorder of some kind.

Feb 9 · Liked by Gary Marcus

Few ideas are as dumb as that of an LLM-based agent (with the possible exception of an agentic LLM-based robot). The only upside is that any such agent will itself be too dumb to represent (by itself) an existential risk. Nevertheless, alignment is HARD and requires a very high level of intelligence (knowledge, understanding, problem-solving ability, etc.) on the part of the agent, which any LLM-based agent will simply not possess. Any such agent will therefore be very poorly aligned, i.e. substantially misaligned, BY DEFINITION, and, if deployed at scale, would be certain to inflict massive (albeit most likely non-existential) societal harm. Anyone who actually understands AGI (and I'm not sure that I would include anyone at OpenAI in this set, not even Shane Legg, and certainly not Sam Altman) already knows IN ADVANCE that this societal harm will 100% occur if any such system is deployed at scale. It's pretty much the second-worst AI nightmare imaginable; any AI person worth their salt knows this, and yet they do it anyway.


What can a machine intelligence align to? What are the requirements of the simulacrum to be constructed in order for AI to obtain a trustworthy "alignment" with the priorities of humans, in the embodied mortal animal sense?

Setting aside the question of whether such a simulacrum can be built, do any of the AI programmers even have a comprehensive idea of those requirements, and their ramifications?


Maximal alignment is both surprisingly complex to define and even more difficult to implement. The first ~40 pages of my draft AGI paper (https://www.bigmother.ai) define maximal alignment in some detail, starting from zero, and are designed to be accessible to a non-technical reader. I would say that most AI researchers do NOT have a comprehensive understanding of alignment and what is required to achieve it.

Feb 10 · Liked by Gary Marcus

I think the most terrifying thing about the takeover AI is that folks will have it on their devices without knowing it. Imagine you're a big enterprise corp (like, say, Salesforce) and you've been trying to get some value out of GPT. Your team keeps telling you: well, we could bake it into our infrastructure, but we'd have to rebuild everything from the ground up to make it really do what you want. So you turn to OpenAI and say, build us an agent that can click through our legacy interface.

Suddenly, if you're a user of one of these tools, you get a pop-up asking you to agree to some new terms; you click it, download a thing, and your device is now taken over by a Salesforce bot that can (accidentally or maliciously) do much, much more than save you a few clicks on that webpage.

author

💯

Feb 9 · Liked by Gary Marcus

If the device takeover thing actually happens, it will last about a week, until all the people dumb enough to give ChatGPT their iPhone come out with their horror stories and everyone collectively realizes that hallucinations matter and limit the usefulness of LLMs.


I'm wondering if the Rabbit of CES fame will give us an idea this summer of how viable (likely not) this idea currently is, assuming they ship.


You know, how many real problems could we solve with $7 trillion instead of creating an unimaginable number of AI chips? Oh, wait, yes, I understand: I can't sell that with the idea of making a profit. Sorry, my bad.

author
Feb 9 · edited Feb 9 · Author

Bingo! Can't believe I didn't think to say that.

Feb 10 · Liked by Gary Marcus

The device takeover issue could actually be a fairly big deal when considering devices like Apple’s Vision Pro, turning what I would consider mainly an entertainment and escapism platform into something more. The worst part is that users would likely welcome the agent assistance and have a harder time detecting possible functionality errors. The perceived convenience will drive the usage of these and open up the can of inevitable worms this tech will lead to.

Feb 10 · Liked by Gary Marcus

One of the worst things about our world today is that very rich people are taken seriously even when they are obviously talking nonsense. All I can think is that somebody should ask an image generator to draw a meme of Altman doing the Dr. Evil pose.

And yes, before I let an LLM agent take over my device, I'd rather go back to not having a phone. But I find it difficult to believe that it would ever be rolled out with permissions to automatically pay and respond to emails and suchlike. That would go wrong so badly and so quickly for so many people that it would be rolled back the next week.


There is clearly something about the CEO/founder role that selects for delusional people with massive egos. And across so many startups, that means we end up with delusional people in control of important and influential companies. The numbers…

If anything, it's another good argument for reasonable regulation. We cannot trust companies to make the best decisions for society, because they are run by insane people. We want them to question norms and explore new concepts, but we also need to recognise they very well could blow up the whole world, gleefully.


No person or collection of people, not even gullible Gulf oil sheiks, is cutting a check for even one measly trillion if Sam Altman and his coreligionists can't come up with a better use case for AI than writing (mediocre) fiction, which is really the only thing you can trust LLMs with right now, given the hallucinations, intentionally deceptive outputs, and crappy security. Marc Andreessen recently tried to spin deceptive AI as being "gloriously uncontrollable," but what is the business use case for your "agent" lying to you? Maybe I'm not smart enough to understand how that would add to my life or my business?


BTW, Andreessen's on a new tear. As of 2 hours ago, he tweeted a parody white paper in which AI is developed by a company concerned about ethics, responsibility, and AI not being misused; the (alleged) joke is that the result is completely unusable. The crazy subtext: "If you don't let us develop AI with zero restraints or guardrails, you get an unusable mess! And how DARE you Luddite plebes whine about safety or ethics." https://twitter.com/pmarca/status/1756096824300249172


Today's Americans: "I'll have my agent call your agent."

There's nothing new about OpenAI's AI agents. In spirit, it's an idea similar to Alexa or Google Assistant: an all-in-one app.

This is the same approach tech startup Rabbit (with 17 employees in the UK) used to train the AI agents in its R1, a neon-orange, credit-card-sized device selling for $199. The large action models, or LAMs, are trained on humans interacting with apps.

So what are some of the Earth-shattering, complex human uses requiring special 007 agents? "Order a pepperoni pizza from Domino's." "Call an Uber to go from home to work." "Play Daft Punk." "Generate a cat." "Suggest a recipe from the fridge."

AI is a revolution awright... a revolution in eye-rolling e_e

Feb 10 · Liked by Gary Marcus

With all the tribulations of 2023, OpenAI has lost some momentum, and its CEO is trying to regain it by making explosive, or desperate, announcements. It is a kind of bluff, I suppose. Nevertheless, it brings to our attention a simple fact: the dissemination of advanced AI tools cannot be left to private companies and driven only by business. There must be close state control of these potentially dangerous products; an administration should approve them before commercialization. If insecure AI-driven products are released, a lot of people will buy them for their supposed great convenience, despite the warnings. We cannot just rely on the probity of companies, or on the self-awareness of customers.

Feb 10 · Liked by Gary Marcus

Lol (not really), Gary, about your $2000 transfer command note :)

Hands with >5 fingers, clueless elephant pics, etc. have no real-life consequences. But when actual and irreversible harm results, it will only be a matter of time before the public figures out that the Emperor has no clothes.

Feb 10 · Liked by Gary Marcus

" In short, I am putting LLM-based agents distributed to millions of customers at the top of my list of most dangerous ideas of 2024."

Ghost town Google Plus gives this statement a +1
