45 Comments
Feb 9 · Liked by Gary Marcus

Maybe ChatGPT did the trillion-dollar math?

What are all these tasks that people think they can automate? At some point, the human at least has to have intentions that they communicate to the device. All of this starts to make my head hurt.

And we forget that automation decreases the human's incentive to pay attention and thus catch problems (as currently seen in Teslas having the most accidents of any brand, based on insurance claims).

author

First sentence is hilarious!

Feb 9 · Liked by Gary Marcus

Saw the $7T Altman headline and my eyes involuntarily rolled up. Also, no way in hell I'd ever let any AI agent take over any of my devices.

Orwell is twitching in his grave. And I don't mean streaming.


If "spinning in your grave" were a literal thing, Orwell would be putting out enough RPMs and torque to power an aircraft carrier.


He's just getting started...

author

😂


Birgitte, exactly, me neither - zero need to willingly invite stupidity into my life.


Taking over the device is probably the dumbest idea ever, unless the system taking over is super smart and super ethical. Neither is anywhere near reality.

The 7 trillion definitely has a bitcoin 'number go up' vibe. It also has a Sam Bankman-Fried vibe, as in he might actually believe going insanely big will turn out well (I suspect SBF convinced himself of the same, which doesn't change your actual accountability). Or this is part of his super grandiose dreams that include limitless free energy through fusion or fission (he is invested in both) and now new chip paradigms (which, I must admit, are necessary to do anything even resembling true intelligence).

This is not thinking outside the box. This is being out of your mind.

author

🤣


“Out of your mind” >> my thoughts exactly. I’m no psychologist, but it does feel like there’s a delusional disorder of some kind.

Feb 9 · Liked by Gary Marcus

Few ideas are as dumb as that of an LLM-based agent (with the possible exception of an agentic LLM-based robot). The only upside is that any such agent will itself be too dumb to represent (by itself) an existential risk. Nevertheless, alignment is HARD and requires a very high level of intelligence (knowledge, understanding, problem-solving ability, etc.) on the part of the agent, which any LLM-based agent will simply not possess. Any such agent will therefore be very poorly aligned, i.e. substantially misaligned, BY DEFINITION, and, if deployed at scale, would be certain to inflict massive (albeit most likely non-existential) societal harm. Anyone who actually understands AGI (and I'm not sure that I would include anyone at OpenAI in this set, not even Shane Legg, and certainly not Sam Altman) already knows IN ADVANCE that this societal harm will 100% occur if any such system is deployed at scale. It's pretty much the second-worst AI nightmare imaginable -- any AI person worth their salt knows this, and yet they do it anyway.


What can a machine intelligence align to? What are the requirements of the simulacrum to be constructed in order for AI to obtain a trustworthy "alignment" with the priorities of humans, in the embodied mortal animal sense?

Setting aside the question of whether such a simulacrum can be built, do any of the AI programmers even have a comprehensive idea of those requirements, and their ramifications?


Maximal alignment is both surprisingly complex to define and even more difficult to implement. The first ~40 pages of my draft AGI paper (https://www.bigmother.ai) define maximal alignment in some detail, starting from zero, and are designed to be accessible to a non-technical reader. I would say that most AI researchers do NOT have a comprehensive understanding of alignment and what is required to achieve it.

Feb 10 · Liked by Gary Marcus

I think the most terrifying thing about the takeover AI is that folks will have it on their devices without knowing it. Imagine you're a big enterprise corp (like, say, Salesforce) and you've been trying to get some value out of GPT. And your team keeps telling you: well, we could bake it into our infrastructure, but we'd have to rebuild everything from the ground up to make it really do what you want it to do. You turn to OpenAI and say, build us an agent that can click through our legacy interface.

Suddenly, if you're a user of one of these tools, you get a pop-up asking you to agree to some new terms; you click it, download a thing, and your device is now taken over by a Salesforce bot that can (accidentally or maliciously) do much, much more than save you a few clicks on that webpage.

author

💯


You know, how many real problems could we solve with $7 trillion, instead of creating an unimaginable number of AI chips? Oh, wait, yes, I understand: I cannot sell that with the idea of making a profit. Sorry, my bad.

author
Feb 9 · edited Feb 9 · Author

Bingo! Can’t believe I didn’t think to say that.

Feb 10Liked by Gary Marcus

The device-takeover issue could actually be a fairly big deal when considering devices like Apple’s Vision Pro, turning what I would consider mainly an entertainment and escapism platform into something more. The worst part is that users would likely welcome the agent assistance and have a harder time detecting possible functionality errors. The perceived convenience will drive the usage of these and open up the can of inevitable worms this tech will lead to.

Feb 10Liked by Gary Marcus

One of the worst things about our world today is that very rich people are taken seriously even if they are obviously talking nonsense. All I can think of is that somebody should ask an image generator to draw a meme of Altman doing the Dr Evil pose.

And yes, before I let an LLM agent take over my device I'd rather go back to not having a phone. But I find it difficult to believe that it would ever be rolled out with permissions to automatically pay and respond to emails and suchlike. That would go wrong so badly and so quickly for so many people that it would be rolled back the next week.
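
(If something like that ever did ship, the only sane design would be to gate exactly those actions behind explicit, deny-by-default grants. Here is a minimal sketch of the idea in Python; every name below is invented for illustration, not any vendor's actual API.)

```python
# Hypothetical deny-by-default permission manifest for a device agent.
# Every name here is invented for illustration; no vendor's real API.
AGENT_PERMISSIONS = {
    "read_screen": True,      # observing UI state: relatively benign
    "click_and_type": True,   # driving the UI
    "send_email": False,      # irreversible actions stay off by default
    "make_payment": False,
    "max_payment_usd": 0,     # hard cap even if payments were enabled
}

def authorize(action: str) -> bool:
    """Allow an action only if it has been explicitly granted."""
    return AGENT_PERMISSIONS.get(action, False) is True

assert authorize("click_and_type")
assert not authorize("make_payment")  # paying is opt-in, never a default
```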


There is clearly something about the CEO/founder role that selects for delusional people with massive egos. And across so many startups, we end up with delusional people in control of important and influential companies. The numbers…

If anything, it’s another good argument for reasonable regulation. We cannot trust companies to make the best decisions for society, because they are run by insane people. We want them to question norms and explore new concepts, but we also need to recognise that they could very well blow up the whole world, gleefully.


No person or collection of people, not even gullible Gulf oil sheikhs, is cutting a check for even one measly trillion if Sam Altman and his coreligionists can't come up with a better use case for AI than writing (mediocre) fiction, which is really the only thing you can trust LLMs with right now, given the hallucinations, intentionally deceptive outputs, and crappy security. Marc Andreessen recently tried to spin deceptive AI as being "gloriously uncontrollable," but what is the business use case for your "agent" lying to you? Maybe I'm not smart enough to understand how that would add to my life or my business?


BTW, Andreessen's on a new tear. As of two hours ago, he tweeted a parody white paper in which AI is developed by a company concerned about ethics, responsibility, and AI not being misused, so the (alleged) joke is that it is completely unusable. The crazy subtext: "If you don't let us develop AI with zero restraints or guardrails, you get an unusable mess! And how DARE you Luddite plebes whine about safety or ethics." https://twitter.com/pmarca/status/1756096824300249172


Today's Americans: "I'll have my agent call your agent."

There's nothing new with OpenAI's AI agents. In spirit, it’s an idea similar to Alexa or Google Assistant, an all-in-one app.

This is the same approach the tech startup Rabbit (with 17 employees in the UK) used to train AI agents for its R1, a neon-orange, credit-card-sized device selling for $199. Its large action models, or LAMs, are trained by humans interacting with apps.
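
Presumably "trained by humans interacting with apps" means logging demonstration sessions as (observation, action) pairs and training a model to imitate them. A toy sketch of what one such trace might look like; this is a guess at the shape of the data, not Rabbit's actual pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class ActionTrace:
    """One human demonstration: what was on screen, and what the human did."""
    task: str
    steps: list = field(default_factory=list)

    def record(self, observation: str, action: str) -> None:
        self.steps.append((observation, action))

# A human demonstrates the task once; a "large action model" would then be
# trained to imitate many thousands of traces like this one.
trace = ActionTrace(task="Order a pepperoni pizza from Dominos")
trace.record("home screen", "open the Dominos app")
trace.record("menu page", "tap 'Pepperoni Pizza'")
trace.record("cart page", "tap 'Checkout'")
```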

So what are some Earth-shattering and complex human uses requiring special 007 agents? "Order a pepperoni pizza from Dominos." "Call an Uber to go from home to work." "Play Daft Punk." "Generate a cat." "Suggest a recipe from the fridge."

AI is a revolution awright... a revolution in eye-rolling e_e

Feb 10 · Liked by Gary Marcus

With all the tribulations of 2023, OpenAI has lost some momentum, and its CEO is trying to regain it by making explosive, or desperate, announcements. It is a kind of bluff, I suppose. Nevertheless, it brings to our attention the simple fact that the dissemination of advanced AI tools cannot be controlled by private companies and driven only by business. There must be close state control of these potentially dangerous products; a regulatory administration should approve them before commercialization. If insecure AI-driven products are released, a lot of people will buy them for their supposed great convenience, despite the warnings. We cannot just rely on the probity of companies, or on the self-awareness of customers.

Feb 10 · Liked by Gary Marcus

Lol (not really), Gary, about your $2000 transfer command note :)

Hands with >5 fingers, clueless elephant pics, etc. have no real-life consequences - but once actual and irreversible harm results, it will only be a matter of time before the public figures out that the Emperor has no clothes.


" In short, I am putting LLM-based agents distributed to millions of customers at the top of my list of most dangerous ideas of 2024."

Ghost town Google Plus gives this statement a +1


"“OpenAI is developing a form of agent software to automate complex tasks by effectively taking over a customer’s device.... requests [that] would trigger the agent to perform the clicks, cursor movements, text typing and other actions humans take as they work with different apps.” Say what? Take over user devices?"

Basically like Remote Desktop, but with the user potentially held responsible for the actions of an autonomous AI. I can imagine this defense being used: "It wasn't me that dump-posted on your website - it was the one-armed AI!"
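
Mechanically, there is nothing exotic about it either: off-the-shelf libraries such as pyautogui already do the "clicks, cursor movements, text typing" part. The only new piece is letting a model decide where to click. A minimal sketch, with the model's decision step stubbed out as a hypothetical:

```python
import pyautogui  # real, off-the-shelf UI-automation library

def next_action():
    """Stub for the agent's decision step. In the scenario under discussion,
    a model would emit actions like this; here it is hard-coded."""
    return {"type": "click", "x": 240, "y": 96}  # hypothetical coordinates

action = next_action()
if action["type"] == "click":
    pyautogui.moveTo(action["x"], action["y"], duration=0.3)  # cursor movement
    pyautogui.click()                                         # the click
elif action["type"] == "type":
    pyautogui.typewrite(action["text"], interval=0.05)        # text typing
```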

author

Maybe Meta has been beta testing this for years, Yann?


Honestly, I did lose control of my Windows 10 mouse a couple weeks ago. It was like a remote desktop session took over my PC, and I didn't even click on any popups. The mouse moved around, then 20 seconds later, I got it back. It hasn't happened to me before, but I've read it can.

Feb 9 · Liked by Gary Marcus

The idea of an agent having root access to a computer connected to the internet is terrifying, and millions of devices running agents even more so. The AI botnet that ends us all. But more seriously, I totally agree on the 10x investment returns. It seems unlikely they will raise that amount. Might as well buy TSMC for 2x its market cap.
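
For anyone who wants the back-of-the-envelope math (the only assumed figure is TSMC's market cap, roughly $600B in early 2024):

```python
# Back-of-the-envelope numbers. The only assumed figure is TSMC's
# market cap, roughly $600B in early 2024.
altman_ask = 7.0e12                   # the reported $7 trillion ask
tsmc_market_cap = 0.6e12              # assumption: ~$600B
buyout_price = 2 * tsmc_market_cap    # the "2x market cap" buyout premium

print(altman_ask / buyout_price)      # ~5.8: could buy TSMC at a 100%
                                      # premium nearly six times over
print(altman_ask * 10 / 1e12)         # 70.0: a 10x return implies $70T of
                                      # value, most of annual world GDP (~$100T)
```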
