70 Comments
Mar 30 · Liked by Gary Marcus

"A $100B LLM is still an LLM"

100% this. There have been no significant paradigm shifts that move us away from hallucinations and similar problems with LLMs.


History repeats itself. During the previous cycle, in the '80s and '90s, big money was spent on infrastructure and research programs, at that time by governments: Japan's 5th-generation computer project and the US Strategic Computing Initiative. Just like today, they didn't have a clear strategy back then; the hope was that AI would somehow "emerge" :)


Microsoft in that period also wasted tens to hundreds of billions on trying to get AI. Bill Gates was a big believer. He is again. Microsoft is again. It is part of the 'genes' of the company.

In Europe, amounts of that scale were spent on Eurotra (computer translation, something that was meant to solve a huge bottleneck in Europe).


The difference between now and the 1980s is that modern machine learning methods have a very successful ten-year track record in the commercial sector: voice recognition, vision, protein folding, AlphaGo, and, more recently, language and art generation.

If Microsoft and OpenAI are pragmatic, they will not only add more parameters to LLMs but also research algorithms that can augment existing frameworks.

If the machine is built in stages, with money allocated gradually over the years and promising results coming out in the meantime, it will turn out to be a good bet. There is always risk, of course, but if you don't take calculated risks, you fail.

Mar 31 · edited Mar 31 · Liked by Gary Marcus

Back then there were also quite a few commercial applications: expert systems and speech recognition (yup, Dragon Systems was founded in 1982; check it out, it's a very interesting story and quite educational from an entrepreneurial point of view). Samuel's checkers-playing program (1959), of which AlphaGo is basically just an improvement, was already an old story, and logic programming and Prolog (1972) were already getting old too, just like neural networks are today.

Btw, it's already been more than 5 years, and I am not aware of any major breakthroughs in the protein-folding arena, even though AlphaFold was supposed to revolutionize the field.


We did manage to make speech recognition work reliably. It took decades. And amusingly it took statistics, neural nets, and lots of data to make it work, not clever algorithms.

It is grossly unfair to say AlphaGo is "basically just an improvement" on an already-old story like Samuel's checkers program. You could as well call AGI "basically an improvement" over expert systems.

Go is astronomically more complex than checkers, and it took a great deal of cleverness (and neural nets) to make it work.

Truth is, the field of AI has advanced immensely, despite setbacks, and the last 15 years have been miraculous. Of course there's more work to do, likely a decade or two.


"not clever algorithms" - well, I would argue Hidden Markov Models are quite a clever algorithm :)

"AlphaGo is basically just an improvement" - the overall algorithm is the same, the difference is in using a neural net to compute the goodness of a board state

"You could as well call AGI "basically an improvement" over expert systems" - do we even have a definition of AGI?! - if you define it as a universal expert system, yeah, sure

"Go is astronomically more complex than checkers" - it might be more complex in terms of number of possible board states but it's the same category of game - fully observable, discrete and deterministic

"Truth is, the field of AI has advanced an immense lot" - the big question is if it has advanced in the right direction, cause, you know, dirigibles ...


The word "basically" does a very poor job here.

Building an entire functioning city is "basically" the same as building a shack in the woods. You just add more scale, more engineering, more rigor, more time, more systems, more planning.

Deep neural nets are a tool. Markov and Bayesian methods are tools. Symbolic methods are tools. You can do a good job and make steady progress, or you can stand on the sidelines sneering.


We have only a few methods at our disposal, whether neural, probabilistic, symbolic, or mathematical.

Yet, just like architects of buildings and structures, we will make do. Trivializing the work being done isn't helping anything.

Mar 30 · edited Mar 30

Blind faith in scaling laws that they don't understand plus the promise of owning the means of production is an irresistible combination. Losing their $100B is the only way that they'll learn.


The sunk cost fallacy rears its head.


Scaling laws alone can't save anybody. But the effect of scale so far has been huge. Some phenomena are truly complex, beyond human capacity to code up by hand. In many cases, data has been able to sort things out.

What Microsoft and OpenAI need, and I am sure they are acutely aware of it, is also architecture work. Not hand-crafted symbolic systems, but architectures that can take advantage of compute and data, of which we have plenty.


What we need is a way to interpret these models so that we can algorithmically add symbolic manipulation to the system. To do that we need to understand how the models work.
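
As one crude, hypothetical sketch of what "adding symbolic manipulation" could mean in practice (this is not any existing system's architecture, and llm_propose below is a placeholder, not a real API): a model proposes an answer and a symbolic layer checks it exactly. The comment's deeper point about interpreting the models' internals is the harder half, which this sketch does not touch.

```python
# Illustrative only: a hypothetical hybrid loop in which a model proposes an
# arithmetic answer and a symbolic layer verifies it exactly.
import ast
import operator as op

SAFE_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}


def symbolic_eval(expr: str):
    """Evaluate a simple arithmetic expression exactly, via its syntax tree."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in SAFE_OPS:
            return SAFE_OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))


def llm_propose(question: str) -> str:
    # Placeholder for a model call; imagine it sometimes hallucinates.
    return "19"


def answer(expr: str):
    proposed = llm_propose(f"What is {expr}?")
    checked = symbolic_eval(expr)  # exact result from the symbolic side
    if float(proposed) != checked:
        # A richer system might re-prompt, log, or explain the discrepancy.
        print(f"model said {proposed}, symbolic check says {checked}")
    return checked


print(answer("3 * 7"))  # prints the mismatch, then 21
```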


We surely need to bring more rigor to current systems, and interpretation of what they do. I don't think Altman and the rest are clueless about that. Statistical prediction alone is just that, prediction, and predictions have a failure rate.

Mar 30 · Liked by Gary Marcus

A bad thing about this prospect of a plan is the signal it might send to competitors and companies that are trying (in vain?) to capitalize on LLMs, AI, or whatever the name of the day is. Money that companies could spend on useful endeavors will probably be squandered. A technology that will likely cost jobs (implemented successfully or not), that treats humans as externalities, and that will likely not result in a more stable food supply, quality of life for the masses, and so on, sounds to me like a shortsighted investment driven by greed, narcissism, and perhaps idiocy, resulting in a positive feedback loop that won't end well, I'm afraid. But if it can happen, it will. These companies have deep enough coffers to steer and manipulate public opinion and buy or nudge politicians.


Just think about the public transit system all that money could have built...


This is a false choice. Public transit makes sense in dense areas, and it is the job of cities to fund it, not of investors. There are now 1.47 billion cars in the world, and 1.35 million people are killed in road accidents each year.

Unless we go back to Mao's time and everybody rides a bike, cars are here to stay, and their numbers will only go up. Improved car safety is to everybody's benefit.

Meanwhile, sure, feel free to take the bus, or even bike, or walk, if that works for you; same for folks in urban cores.

Mar 30 · Liked by Gary Marcus

It's states that make the most difference in funding public transport, not cities. Do you think Japan, China, and Russia have the excellent public transport they do because Tokyo, Beijing, and Moscow have really good mayors? It also makes sense in most areas; just look at Europe and Asia to find plenty of examples.

It's only so inconvenient not to own a car in the US because the auto industry has done everything it can to make it that way, and the whole country has been built up and organised with car ownership being assumed.

In any case, I think the broader point was that, regardless of whose "job" it is, this is money that could be better spent in ways that benefit people, rather than harm us and hamstring the progress of AI research as a whole; transit was just given as an example.

Let's also not pretend public and private are entirely different sectors, considering public-private partnerships are everywhere these days and actively promoted by some of the world's most powerful people, like the head of the executive branch of the EU. A ton of the money poured into AI companies, if not in this specific case, comes from governments, especially due to potential military applications.

Money badly spent is always money that could be better spent, and it's always a real choice.


Our society has worked best by putting its eggs in multiple baskets. Public transport for everyone is just not the solution. You do your thing, and others will do theirs.


Public transit also makes sense between dense areas. Compare train distribution in the US and Europe. That's just one example.


Seriously. I come from Europe, where countries are packed into a tiny little continent. Here we have ALL. THIS. SPACE. Where are all the rail lines??


The size of the European Union is of the same order of magnitude as the US (the EU is about half the size of the US), so 'a tiny little continent' seems a bit too extreme. The EU has about 1.5 times the population of the US, so its density is about 3 times that of the US. If you add non-EU states like the UK, Switzerland, and the former Yugoslavia, it gets even a bit closer.

Formally, everything up to the Ural Mountains is the continent of Europe (making it about equal in size to the US), but in this context that seems unfair to use.
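
Rough numbers behind that density comparison (approximate early-2020s figures, just to check the order of magnitude):

```python
# Approximate figures (early 2020s), good enough for an order-of-magnitude check.
eu_area_km2, us_area_km2 = 4.2e6, 9.8e6
eu_pop, us_pop = 448e6, 333e6

eu_density = eu_pop / eu_area_km2   # ~107 people per km^2
us_density = us_pop / us_area_km2   # ~34 people per km^2

print(f"EU ~{eu_density:.0f}/km^2, US ~{us_density:.0f}/km^2, "
      f"ratio ~{eu_density / us_density:.1f}x")
```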


You're missing the point. First, "tiny little continent" was not meant to be taken quite that literally. Second, you're comparing the US population to the EU population—that's comparing the population of one country to the collective population of an entire continent, so that doesn't really work. It's also about culture and money. Europeans don't have the car culture the US does, nor the 'car capitalism' if you will. That's really the core of the matter. No value judgment, just observation. Personally, I love to drive—on long roadtrips, through the vast open spaces here on the West Coast you'd be hard pressed to find in Europe. But I also love trains and the experience that offers, along with the much greater fairness—equity—that public transportation provides to people.


The problem is that AVs are, and will probably remain, too costly for most countries. Even if AVs ever become safe, the countries that can afford them account for fewer than about 250K road deaths a year. So AVs are not the solution until the costs become trivial.


The world is becoming richer, and the cost of self-driving cars will go down. Cars are already chock-full of electronic sensors and software. It is not realistic to expect self-driving cars to eliminate all deaths on the road, of course. But cars are surely one of the biggest ways of getting yourself killed without having a disease, so progress in that area will be highly valuable.


"Presumably the operating idea here is that (a) $100B will equal AGI and (b) that customers will pay literally anything for AGI. I doubt either premise is correct." It just struck me how important that second part is. If AGI amounts to 'another human' then its value is equivalent to 'another human' and for that we already know 'pay anything for' is nonsense.

Apr 1 · edited Apr 1 · Liked by Gary Marcus

And if an octillion-parameter LLM in 2031 ruins lives in a manner similar to the tragedy of the subpostmasters and subpostmistresses ("Mr Bates vs The Post Office") - the fault in this case being LLM hallucination - will there be outrage?

Or will critics, this time, because the sunk cost is so deep, have their mouths taped up, their bodies broken and their families and lives torn asunder by a sophisticated army of lawyers, corrupt politicians, troll farms and other thugs (maybe literal thugs, and autonomous machines)?

Time to cash in on this next blockbuster movie idea… does anyone have James Cameron’s number?! 😂


The whole thing with AI seems like pure hype. No one is investing the kind of money that would be needed to really rip apart the brain and work out how intelligence emerges. Why bother when you can churn out another cruddy LLM and just hype it TF?

Capitalism and market forces killed the search for AGI as soon as a minimum viable fake became available.


Quite right. This 'single approach' (token/pixel statistics) is going to fail regarding AGI.

Taking OpenAI's own last published scaling numbers (for GPT-3) and using the Winogrande benchmark (some real understanding challenges), I have calculated (quick and dirty) from those numbers that we may need something on the order of ~3,500,000,000,000,000 *times* as much as we currently have. See https://ea.rna.nl/2024/02/13/will-sam-altmans-7-trillion-ai-plan-rescue-ai/ for the details. The effect of scaling is something like the log of the log of (the log of) size. Going from a $10B to a $100B investment is going to do nothing.
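
To show the shape of that argument (not the author's actual numbers, which are in the linked post), here is a sketch of the kind of extrapolation involved: fit benchmark accuracy against log(compute) and solve for the compute needed to hit a target score. All data points below are invented placeholders; the linked post argues the real curve is even flatter (a log of a log), which makes the required multiple astronomically larger still.

```python
# Illustrative only: the data points are invented placeholders, not the GPT-3
# Winogrande figures from the linked post. The point is the shape of the
# extrapolation: if accuracy grows roughly linearly in log(compute), the last
# few points of accuracy require an explosion of compute.
import math

points = [(1e3, 0.60), (1e4, 0.65), (1e5, 0.70)]  # (compute, accuracy), hypothetical

# Least-squares fit of accuracy = a + b * log10(compute)
xs = [math.log10(c) for c, _ in points]
ys = [acc for _, acc in points]
n = len(points)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
a = (sum(ys) - b * sum(xs)) / n

target = 0.95  # an arbitrary "real understanding" threshold
needed_log10 = (target - a) / b
print(f"compute needed: ~1e{needed_log10:.0f} units, "
      f"i.e. ~1e{needed_log10 - xs[-1]:.0f}x the largest point fitted")
```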


Looking at self-driving cars through the lens of investor ROI is short-sighted, especially coming from somebody who cares as much about societal well-being as you do.

Self-driving cars are hard. Harder than any of the hype predicted. But you offer no workable ideas for what could have been done differently.

The progress in self-driving cars is very good. Think of them as the James Webb telescope. It will get done. It will be good. It will be very much worth it. And investors will get their money. The patient ones.


hi gary; forget about all the ‘hopehype’ and write your ‘positive’ essays. we all need them!


Gary, I'd suggest a bit more research on self-driving before you write more. Tesla currently requires a human in the loop, as do almost all companies except a few that have obtained permits from the CPUC in California and some other jurisdictions to operate without human drivers. Tesla is pursuing a far more difficult, non-sandboxed approach, and that effort, while long and arduous, is about to pay off: FSD v12 has just been released and by all accounts is 99% FSD.

We can expect permits for no human supervision in the next year or two, at which point the robotaxi revolution will begin in earnest. While I am strongly opposed to AGI, I support narrow AI applications like FSD as being, in general, a benefit to the environment, human safety, and perhaps even productivity (or at least good sleep), though of course we will see a massive amount of job loss even from this narrow type of AI being rolled out globally in the coming decade.

Mar 30 · Liked by Gary Marcus

How is it possible that full self-driving vehicles could be better for the environment? I don't see how they would be very different from other electric vehicles, except that they would require far greater expenditures of energy and resources in their production and training, due to their reliance on more complex software and better hardware.

The LLMs that they rely on are disastrous for the environment, and it's hard to see it as a worthy tradeoff when years of their usage have made clear that what we get from them is a worse Internet, worse science, worse education, yet better killer robots, better spam, and better propaganda and misinformation. In addition, they set back AI research considerably and misdirect funding, resources, time, and human effort that could all be better utilised elsewhere, at an almost unimaginable opportunity cost.

The math, statistics, arithmetic, and materials science do not seem to work out, to me. Self-driving cars must be computerised like other modern vehicles, and further they're electric, and therefore the environment suffers a great and ever-increasing toll corresponding to the ever-growing demand for resources such as copper, lithium, and rare-earth metals. For the supply to keep up with the demand, global material extraction—itself reliant on hydrocarbons—must continue to increase by orders of magnitude. That means more deforestation, more habitat loss, more pollution, more processing, and more extinctions. Further, widespread adoption of technologies that require this intensified assault on the earth risks exhausting the world's supplies of many material resources in the near future.

On top of all this are the enormous social and political costs of the "disruption", which without a doubt amount to much more than people losing meaningful work that gives them a sense of pride and fulfillment from doing something with their own hands in material reality. Competition over limited, rare material resources always results in conflict and war, as history shows us, and with war comes unimaginable human suffering, which seems quite contrary to the goal of technology and AI being used to improve our lives and well-being. World militaries and war also pose the greatest harm to the environment in every way that can be measured.

I was hopeful for the future of AI many years ago, but this isn't it, and the fact that they keep throwing money, resources, energy, and labour at LLMs seems downright mad to me, like it must be the result of psychology and a refusal to admit error and change course.


Tesla is not serious. Ditching lidar and even radar was a terrible idea, and Musk is a terrible leader who almost drove Tesla into the ground with his dream of a fully automated factory, and now with "human-like", "eyes-only" driving.

Waymo has taken the serious, thorough, grown-up road. Go deep, work hard, boast little, then broaden. Tesla will be squashing bugs in their hand-strung approach for decades.


The empirical facts would suggest the opposite -- here's a good example of just how competent FSD v12.3 is, with zero disengagements over 45 minutes of very difficult driving: https://www.youtube.com/watch?v=1hSjfmBgI8g&t=870s

author

You have no sense of how trivial 45 min is compared to what is required.


Waymo, Cruise, Baidu, and others are already operating robotaxis, so I guess I'm not clear on the persistence of this "FSD will never be real" meme. It's already here, just not fully distributed. Baidu recently announced a new 24-hour service and is operating in ten cities in China. The lily pond is halfway to being covered. https://www.prnewswire.com/news-releases/baidu-launches-chinas-first-247-robotaxi-service-302084097.html


What is required for FSD is more thorough work and lots of patience. The world is complicated, and no magic exists. You need all the help you can get: depth sensors, maps, more real-world data, more simulations of risky conditions, more AI models, more physics, and human guidance where need be.


errr, see my above response. It's here. Now.


Absence of evidence is not evidence of absence.


I don't know where to post this.

I wondered if anyone remembered this historical example of AI fakes from the 1980s, and had a reference for it.

In the 1980s, the Nobel Prize-winning economist Herbert A. Simon wrote an AI program, named Newton, that was able to curve-fit the motion of the planets to an inverse-square law. Simon concluded that his program was as smart as the human physicist Isaac Newton. Some physicists remarked that Newton didn't invent the inverse-square law of attraction; he invented mass. Does anyone remember this AI program?


The AI program "BACON" (1987) was written by Herbert A. Simon's PhD student Gary L. Bradshaw.

https://www.ijcai.org/Proceedings/81-1/Papers/025.pdf

Apr 2 · edited Apr 2

I agree that customers wouldn’t pay *literally* anything for AGI, but they’d probably pay an awful lot. If you could replace any remote worker in your company - from a software tester up to an executive - with a bot that never sleeps and never takes a vacation, what would that be worth?

I have no idea what the cost to OpenAI would be to run a single instance of an AI worker, but if it’s less than (to make up a number) $100,000 a year, I’m guessing they can find enough customers to make a nice profit.

If, that is, it’s actually AGI.


Colorless green ideas sleep furiously.
