83 Comments

It’s not as if Silicon Valley really cares about facts or data (see: the dot-com bust, et al., every few years). As long as there are incentives to burn capital without returns, there will be people offering matches.

Everything doubles all the time, until it doesn't.

Jun 5 · Liked by Gary Marcus

What I do not understand is why people keep talking about AGI as if it is destined to become reality.

Because the technology appears to be possible, and there is no material reason to assume that it will be prohibitively difficult to create.

Can we agree that this is a subjective statement with an unclear meaning?

Can we agree that "nuclear power" in 1930 was a subjective statement with an unclear meaning?

Without speaking about hypotheticals that seem possible, things like the Manhattan Project do not happen.

It is not destined to be reality, you are right. In the same way, the nuclear bomb was not destined to be a reality (nor was the part where they thought it could ignite the atmosphere, a hypothetical that did not happen).

There are already scientific experiments that show signs of AGI being possible, experiments that are a year old by now. Check "Generative Agents: Interactive Simulacra of Human Behavior" from April 2023, for example.

I have nothing against speaking about hypotheticals and doing scientific experiments; I am a scientist and have been doing this professionally for more than 25 years. My concern is that so many seem to speak about this particular hypothetical as if it were a given destiny. This becomes even more worrying when combined with a deceptive technology (supported by immensely powerful and rich companies) making many people believe it can actually reason.

The difference is that, in 1930, we had the nuclear model of the atom, we knew much about radioactivity, we knew that atoms of one element could transform into atoms of other elements by firing particles at them... in short, the underlying nuclear science was falling into place, and the remaining challenge was engineering a chain reaction.

AGI, on the other hand, is wild speculation based on a strong assumption about the currently unknown relationship between computing machines and minds.

Because all the computing power in the world devoted to inference yields only super-inference; the output remains bounded. The method comes with its own limit.

Last, you're on "Marcus on AI." If I were to rename this Substack, it would be "You Can't Get There From Here."

Because even smart people do dumb things, and when they get contradictory information they double down and attack the information that contradicts their decision. The only cure is for us all to own that human brains are inherently flawed and prone to stupidity.

I've always been puzzled as to why anyone would want to create an AGI in the first place.

Cognitive scientists might hope that it could help them understand intelligence. Techno-optimists hope it will solve problems. Most truly want money, power, and control.

The Human Brain Project was supposed to do that, and a billion euros later they're no closer to figuring out how human cognition works.

The average techno-optimist is willfully ignorant in my experience, and usually far enough into the autism spectrum to not understand what the average person actually wants from technology.

Oh Thomas, it's so easy. There's _lots_ of $$$ to be made. 💵💵💵😂

Jun 5 · Liked by Gary Marcus

Won't AGI also need "vast amounts of nuclear fusion power"? (cf. Altman)

That means we'll get AGI "within thirty years".

Jun 5 · Liked by Gary Marcus

Many people's paychecks depend on the scaling laws being real, so this cult-like delusion persists.

It is more than a cult thing, though. We've seen in the last 5 years that "quantity has a quality of its own". Having a huge amount of data helps sort out many issues. But it won't be enough, and what is left to do will not be quick.

Good point, Andy.

Plus, it's currently _impossible_ to wire a data warehouse to a cluster of quantum computers (qubit error rates are too high), so it's gonna take even longer.

My trillion-pound brain will start taking bets that we will not see (true) AGI in this decade. No solid foundation has been laid for that particular building.

We won't see anything like AGI until well after we figure out how human brains work. The EU has poured a billion euros into the Human Brain Project over the past decade, and we don't seem to be anywhere close to figuring it out.

Yes, and studying brains will only tell you how brain machinery works. The common assumption in the field is that consciousness and intelligence emerge from brains, or from mind. Unfortunately (or fortunately, in that it saves time), after decades of research, I found no evidence for that assumption. It's merely an assumption, like an axiom. So the research is circular, only finding what they started with. Further, when you really go into it deeply, it's not logical.

It would be a long essay to tease all this out, so I may post it as an article, …since the discussions often get stuck on these points, and go around in circles…

No lie. When I had my gut biome replaced after intestinal surgery I became a much happier and more assertive person. The two things might not have anything to do with each other, but it sounds like a lot of research is backing the idea up.

https://chatdev.ai/

ChatDev stands as a virtual software company that operates through various intelligent agents holding different roles, including Chief Executive Officer, Chief Technology Officer, Programmer, Tester, and more. These agents form a multi-agent organizational structure and are united by a mission to "revolutionize the digital world through programming." The agents within ChatDev collaborate by participating in specialized functional seminars, including tasks such as designing, coding, testing, and documenting.
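
For anyone wondering what is under the hood of such a setup, the pattern is roughly a scripted sequence of role-played chats. Here is a minimal sketch of that pattern, not ChatDev's actual code: `call_llm` is a hypothetical stand-in for any chat-completion API, and the role prompts are invented for illustration.

```python
# Minimal sketch of the role-playing pattern behind systems like ChatDev.
# NOT ChatDev's actual code: call_llm is a hypothetical stand-in for any
# chat-completion API, and the role prompts are invented for illustration.

ROLES = {
    "CEO": "You set the product requirements and accept or reject the result.",
    "CTO": "You turn requirements into a technical design.",
    "Programmer": "You write code that satisfies the design.",
    "Tester": "You review the code and report defects.",
}

def call_llm(system_prompt: str, transcript: list[str]) -> str:
    # Placeholder: substitute a real chat-completion call here.
    return f"[reply as '{system_prompt[:24]}...' to {len(transcript)} messages]"

def seminar(task: str, speakers: tuple[str, str], rounds: int = 2) -> list[str]:
    """Two roles take turns extending a shared transcript."""
    transcript = [f"Task: {task}"]
    for _ in range(rounds):
        for role in speakers:
            transcript.append(f"{role}: {call_llm(ROLES[role], transcript)}")
    return transcript

# The whole "company" is a fixed pipeline of such pairwise seminars:
# designing, then coding, then testing.
for phase in [("CEO", "CTO"), ("CTO", "Programmer"), ("Programmer", "Tester")]:
    print(seminar("build a tic-tac-toe game", phase)[-1])
```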

I zoomed into the "what can chatdev do?" image at that website. I saw some samples of complex (!) advanced software engineering, such as a digital clock, a currency converter and a tic-tac-toe game. I am sure these projects require a multi-agent organizational structure to revolutionize our digital worlds.

“Pay no attention to the man behind the curtain.” - The AI of Oz

This is an AI hospital: https://www.youtube.com/watch?v=ewLMYLCWvcI&t=189s

1 minute 54 seconds

Watch the video and you will see AI specialist doctors, nurses, and administration, with AI patients as well.

Check minute 1 ... 54 seconds.

Also, you are wrong. You can run https://chatdev.toscl.com/ in your Google Chrome browser as a plugin. It is very light. I guess people just need to catch up on software development.

Agentic systems are AGI. Do you know what ChatDev is? Guess not.

Calling something "agentic" doesn't make it so (independently). Its goals are defined by humans, whose goals are defined by consciousness.

No, that is incorrect... https://www.youtube.com/watch?v=ewLMYLCWvcI&t=189s As you can see, when you place agentic systems together, they actually outperform the models that are running them after they learn from each other.

GPT-4 scores 83% accuracy on respiratory cases, on par with the average doctor? Sure.

GPT-4 in an agentic system like the one I showed you gives the agents the ability to self-improve once they learn from each other, in a virtual environment that allows them to raise that percentage to 93.04% (better than almost all doctors). This is a fact, and since the system is self-improving, it is strange that it does better than the LLM that is running it, right? Not really, because it is AGI.

Watch the video.
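
And if you want the mechanism rather than the demo: here is a minimal sketch of the loop such systems describe, assuming "learning from each other" means accumulating verified cases and retrieving the most similar ones as in-context examples. All names here are hypothetical, and `ask_model` stands in for any LLM call.

```python
# Sketch of the accumulation loop: the base model's weights never change;
# the agents grow a shared library of verified cases and retrieve the most
# similar ones as in-context examples. All names here are hypothetical.

case_library: list[dict] = []  # shared store that grows as the simulation runs

def ask_model(prompt: str) -> str:
    return "possible diagnosis"  # placeholder for a real LLM call

def similarity(a: str, b: str) -> float:
    # Crude stand-in for an embedding comparison: word-overlap (Jaccard).
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def diagnose(symptoms: str, k: int = 3) -> str:
    # Retrieve the k most similar previously verified cases as examples.
    examples = sorted(case_library,
                      key=lambda c: similarity(c["symptoms"], symptoms),
                      reverse=True)[:k]
    context = "\n".join(f"Case: {c['symptoms']} -> {c['diagnosis']}" for c in examples)
    return ask_model(f"{context}\nPatient: {symptoms}\nDiagnosis:")

def run_simulated_case(symptoms: str, ground_truth: str) -> None:
    # The simulator can verify outcomes, so only confirmed cases are kept.
    if diagnose(symptoms) == ground_truth:
        case_library.append({"symptoms": symptoms, "diagnosis": ground_truth})
```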

Also, that has nothing to do with AGI – it was set up for a specific task, which is all it does. The rest is theoretical.

I watched it. What I said stands.

What’s your definition of having agency?

It’s interesting, because there are other systems that are said to be goal-directed, such as a robot toy or a thermostat, in the sense that they get feedback and seek a goal or state, such as moving towards a human or towards a set temperature. But I would not by any stretch of the imagination assign independent agency to them.

In these “agentic” systems, I am not seeing anything that is in essence different from a thermostat or robotic toy or whatever, other than complexity, which does not in itself change the basic fact that they were given their overall goals and purpose, their reason for existing in the first place, by humans (as instruments in and of awareness); in the examples I gave, to be a playmate or to govern temperature. They did not create this on their own. In pursuit of that, a system can of course be programmed with sub-goals, or to derive sub-goals on its own. But again, complexity doesn’t change the basic facts.
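
To make the comparison concrete, here is a complete goal-seeking "system" in a dozen lines of Python; it senses, compares, and acts on a goal a human gave it, yet nobody would grant it independent agency. The set point and thresholds are arbitrary illustration:

```python
# A complete feedback controller: it senses a state, compares it to a goal,
# and acts to close the gap. The goal itself (the set point) comes from a
# human; the 0.5-degree thresholds are arbitrary illustration.

def thermostat_step(current_temp: float, set_point: float = 21.0) -> str:
    if current_temp < set_point - 0.5:
        return "heat on"
    if current_temp > set_point + 0.5:
        return "heat off"
    return "hold"

for temp in (18.0, 20.8, 23.1):
    print(f"{temp} -> {thermostat_step(temp)}")
```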

So is a thermostat agentic, in your definition?

No, an agent is an IDE program on a computer that arranges an AI (like an LLM base) so that its conversations can be stored and examined in the context of whatever you set it up to do. This should operate on your system and do research, create documents, write code, and interact in real time with its environment.

As an agent develops and gets more information from its environment, it will store that locally, and the local version of the LLM IDE will improve based on what is done and needed.

The human no longer needs to interact with the agent in order for it to do the task, and in a community of agents, they can run a company. Very useful.

It sets up "self-play", which allows the AI to get better than its LLM base.

So the AI improves itself.

I think you are confusing sentience with AGI.
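
To be concrete, here is a minimal sketch of the loop I'm describing, assuming "self-play" means two instances of the same base model taking turns proposing and critiquing, with transcripts stored locally. The file name and functions are hypothetical placeholders, not any particular product's API:

```python
# Sketch of the loop described above: two instances of the same base model
# take turns proposing and critiquing, and the transcript is persisted
# locally so later sessions can build on it. The file name and functions
# are hypothetical placeholders, not any particular product's API.

import json
from pathlib import Path

LOG = Path("agent_transcripts.json")  # the agent's local store

def ask_model(instruction: str, transcript: list[str]) -> str:
    return f"[{instruction}]"  # placeholder for a real LLM call

def self_play(task: str, rounds: int = 3) -> list[str]:
    transcript = [f"Task: {task}"]
    for _ in range(rounds):
        transcript.append("Proposer: " + ask_model("propose a solution", transcript))
        transcript.append("Critic: " + ask_model("find flaws in the proposal", transcript))
    # Persist locally; the next session can load this and start from it.
    history = json.loads(LOG.read_text()) if LOG.exists() else []
    history.append({"task": task, "transcript": transcript})
    LOG.write_text(json.dumps(history, indent=2))
    return transcript

self_play("research a topic and draft a document")
```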

That graph presumes nearly infinite output from Nvidia, and electricity from who knows where.

It also, and much more importantly, ignores the growing evidence that transformers appear to have plateaued in capability and in some instances have started to decline.

Jun 6 · Liked by Gary Marcus

https://nonint.com/2024/06/03/general-intelligence-2024/

From a blog post by a current OpenAI engineer on the GPT-4o team: "So my current estimate is 3-5 years for AGI. I’m leaning towards 3 for something that looks an awful lot like a generally intelligent, embodied agent (which I would personally call an AGI). Then a few more years to refine it to the point that we can convince the Gary Marcus’ of the world."

https://nonint.com/2024/05/14/gpt-4o/

Author

that’s hilarious!

Jun 6 · edited Jun 6

I was curious and read the general-intelligence blog piece to see what a current OpenAI engineer thought, and that's a pretty funny quote. Thanks for linking to it.

There are a couple of really big "I am confident we'll have a breakthrough" moments in that blog piece without any empirical evidence backing them up, so I am fairly confident that the estimate is purely speculative and probably wrong.

However, what struck me was that someone actively working at OpenAI on this stuff had such a casual, almost flippant tone, which is alarming given the gravity of the topic. The author's breezy confidence in AGI's imminent arrival, thrown out without substantive evidence, highlights a troubling lack of concern for the profound risks that AGI might actually bring with it.

This seems so myopic, purely focused on the technical achievement, and detached from the real-world consequences of that achievement.

I know it seems funny in some ways, but on reflection I actually find it a bit chilling. Not the prediction which seems largely speculative, but the attitude.

This lack of scientific humility should give you a clue about how seriously you can take OpenAI now.

It's not chilling. It's breathtakingly stupid.

Hey man, do you want to colonize outer space or not?

🤣

Jun 7 · edited Jun 7

Point taken.

I was thinking the prediction was not to be taken seriously, but that the idea of someone working on it being so focused on the technical achievement as to be borderline sociopathic was chilling. But they aren't going to get there without a healthy respect for science and the effort it takes to make those breakthroughs.

OpenAI is a company full of skilled computer scientists LARPing as characters in an Asimov novel.

There's an xkcd for that. https://xkcd.com/605/

These discussions mask what I believe is an overlooked issue:

This whole kerfuffle exists because a bunch of neural net programmers lucked into a step-jump in parallel processing power that was largely driven by the computer gaming market. Suddenly a bunch of things that weren't possible before became (expensively) practical. As I've said before, it's the computing equivalent of the old aerospace saying that you can fly a brick if you have a big enough engine.

The issue in my mind is this: are neural nets the best use of that computing power and its associated environmental costs? By "best" I mean benefit to society, not putting money in VC's bank accounts. The hype masters of "AI" are issuing a deafening chant of "YES! YES! YES!" and nobody seems to be looking at alternatives to these spectacularly inefficient architectures.

I'll give you an example. If you let me have 25% of the machine cycles for a given application, I can apply 50-year-old technology to make that application very hard to attack. It will benchmark at 75% of the rate of an insecure version and so be sneered at by the technical press. But if machine cycles are as cheap and available as the hype masters would have you believe, why don't we spend them on hardening systems instead of on nonconsensual pornography depicting real people?

We are all apes in suits. Pornography hits closer to home than intellectual, pragmatic challenges. We'd better not wire any electrodes into our own brains lest we starve ourselves to death through orgasms... hold that thought: what's been happening with Neuralink? Has anyone followed up recently?

Jun 5 · Liked by Gary Marcus

The problems to solve are immense. Even if chatbots get something right, it is at most the general trend. Lots of fine-grained work remains to be done.

It was an OK 165-page manifesto.

Nothing spectacular or new in it.

A bit sensationalist, because it assumes 'hard takeoff' scenarios. While it is plausible that there will be an 'intelligence explosion', it will happen over decades, not months or years. The way it is described in the manifesto (e.g. 'billions of technicians working simultaneously', paraphrasing) is largely inaccurate. I did not learn anything new from it.

I know I'm going to come across as a hater ('oh DEI and woke culture'), but kudos to some people for using their white privilege well. Think about it -- this guy was FIRED from OpenAI for leaking confidential information, and not only did he seem unbothered by it, he published a short manifesto, started his AGI investment fund, signed a pledge with famous AI researchers about whistleblower protections, went on a famous podcast, and so on. He does not have a "wow" level of intelligence when you listen to him or read his thoughts. He is an ordinary white German guy.

Imagine if I, a Latino brown guy, got fired from an AI company for leaking confidential information. That would mean shame, career over, depression, rejection by my peers. I mean, before even being given a chance to work at one of the top companies, I'm already being rejected by AI researchers and AI employees at big companies when I reach out to them here on Substack--maybe they don't know who I am or the magnitude of my skills, abilities, and potential. This guy suffers zero consequences.

I want you to read this and not think, 'Oh, you're a loser,' but to realize that white privilege is real, and this is a clear example of it.

Having said that, I hope you understand that I am not jealous of this guy Leopold. I am just in awe of the fact that he was fired for leaking confidential information from a company and suffers no consequences while his peers seem to elevate him. I would love to see how people would rally around a black or Latino man in the same situation.

I hope this doesn't offend you, but I needed to get this off my chest.

I want to see more diversity in AI, and yes, that includes seeing more people who look like me in the field. It also makes me feel safe when I approach people and applications. I also want to see variety; it's a bit annoying that many of the relevant people in the field subscribe to these "effective altruism" and "rationalist" cults. I mean, come on, bro, do good locally, for your neighbors, colleagues, and friends; you do not need to save the world. It's a Messiah complex, man. It is also such a turn-off that it prevents me from approaching more people in AI (especially in safety and security). I want people to not dismiss my messages and to help me get a job at one of these top companies--not because I'm Latino and a diversity quota, but because I have the intelligence it takes to help develop and work with AI. I will be professional and ethical and not leak confidential information that could put companies and lives at risk.

Sorry for the stream of consciousness and 'identity politics', but I think this is a clear example of 'white privilege' for those who do not believe it exists.

Have a great day people.

I know I need to be more fearless and make more moves. I am working on it. Do not judge too harshly please :).

Author

The reason he was fired sounds marginal, but I think it is fair to wonder whether people in other life circumstances would have landed on their feet so well and so quickly.

Jun 13 · edited Jun 13

With you on the diversity part, Dr. Daniel.

For starters, there aren't enough women working on the codebases of TensorFlow, PyTorch, or JAX, or on the vast ecosystems that surround them.

That extends to GPU and TPU hardware design, as well as autonomous robotics. It's really hard to find a keynote or demo from any hardware vendor with a leading female presenter, let alone one of colour.

And it's not hard to correlate the failures of ML efforts to account for women in model training and inference with the lack of female participation in this field.

Jun 6 · edited Jun 6

Oooh, and another nice one: his graph suggests GPT-4 is 100 times as powerful as GPT-3. So, 17.5 trillion parameters? OpenAI has never publicly stated how many parameters GPT-4 has, but 17.5 trillion? And of course parameter count alone is a useless value; you need to know parameter volume too. Parameter volume says something about the total number of bits, while the count says something about the number of elements in the architecture. Gemini Ultra, for instance, uses int8 (a byte) as a very imprecise parameter type, but then uses many more of them (so more layers/heads).

To be fair, he expects 6 orders of magnitude, but only 2 orders from physical compute. He expects the efficiency of the algorithms to provide a 100-fold improvement and the power of the algorithms another 100-fold. Right.
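
To put numbers on the count-versus-volume distinction, here is a quick back-of-the-envelope sketch; GPT-3's 175-billion-parameter count is published, while the 100x multiplier and byte widths are purely illustrative:

```python
# Back-of-the-envelope: parameter *count* (number of elements) versus
# parameter *volume* (total bytes). GPT-3's 175B count is published; the
# 100x multiplier and the byte widths below are purely illustrative.

def volume_gb(param_count: float, bytes_per_param: float) -> float:
    return param_count * bytes_per_param / 1e9

gpt3_count = 175e9
hundred_x = 100 * gpt3_count          # the "100x GPT-3" reading: 17.5 trillion

print(volume_gb(gpt3_count, 2))       # fp16: ~350 GB
print(volume_gb(hundred_x, 2))        # fp16: ~35,000 GB
# The same count in int8 halves the volume: as many elements, half the bits,
# which is why count alone says little about a model's actual size.
print(volume_gb(hundred_x, 1))        # int8: ~17,500 GB
```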

O ye of little faith! One doth not question ye Dong Curve, ever proudly erect! Let those whom hath eyes see clear as day: line goeth up! Only up! One dimension be all ye need!

Self-regulation in the autonomous vehicles space has also been a farce.

Meanwhile neural net-powered cars are now ironically more dangerous than ever.

We _need_ to get the “small stuff” right first.

Car computers cannot even perform real-time, high-precision, multidimensional, multivariate quantile regression (critical for autonomous automotive ML), yet preschoolers can adjust their balance while doing deft K-pop moves.

And, speaking about autonomous machines, a Python error thrown right in the middle of a robot serving hot tea to seniors is, uh, unwelcome.

AGI? Pfffffft.

https://au.news.yahoo.com/self-driving-tesla-almost-crashes-212309210.html
