101 Comments
Sep 25 · Liked by Gary Marcus

Eventually the con will be revealed and the bubble will burst.

On the verge of bursting already. Half of my layoffs have been due to bubbles, and like clockwork the job hunt is tough at first, then picks up steam as the technology that got me fired fails. Then, as the lack of experienced tech staff starts to bite hard, the recruiters start calling and begging. I've had five job interviews in just the past few weeks, after almost no response to job applications before that.

Not really. What people, and the so-called expert who wrote this article, fail to understand is that OpenAI was made to research these technologies, and it did. At most, it will be absorbed by Microsoft, its biggest investor.

It's interesting that companies band together to do basic research on something, instead of stealing it from some college or government.

But every single "AI" out there comes from ChatGPT. So this is not a failure, it's a success. Whether it will become a lasting company, who knows, but it did what it was meant to do. It's normal for people to move on.

Not sure it's really coming from OpenAI. From what I can see, most of the discovery research has come from PhD programs, like Stanford's (under Dr. Fei-Fei Li) and MIT's, and those individuals are moving from PhD programs into these companies. There's a lot more going on beyond LLMs and ChatGPT.

Yes, things related to machine learning, LLMs, and so on come from everywhere, like all knowledge. And they definitely stole as much as possible, as they would never invest in something uncertain. But you can see (or at least could) the same errors and bugs from ChatGPT in all the other "AI" from companies and countries, in bots and similar programs. Too many coincidences, too-specific nomenclature. Not to mention others who have reported the same.

The bubble will burst for some companies, for sure. Meta, Grok, and Mistral seem to be me-too efforts. OpenAI is well established by now; it has lots of revenue (and a lot more in expenses). Google will outlast everybody.

What is important to note is that the tech is getting better. Reasoning, even if by example, is a huge deal. Being able to call existing software will be a huge deal, and it's doable (a sketch of the pattern follows). More high-quality recipes will help.
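
To make "call existing software" concrete, here is a toy sketch of the tool-calling pattern in Python. Everything in it (the tool name, the JSON shape, the stand-in function) is invented for illustration; this is not OpenAI's API, just the general idea: the model emits a structured request and ordinary glue code runs the real software.

    import json

    # Hypothetical registry of "existing software" the model may call.
    TOOLS = {
        "get_weather": lambda city: f"22C and clear in {city}",  # stand-in for a real service
    }

    def dispatch(model_output: str) -> str:
        # Assume the model was prompted to emit JSON like:
        #   {"tool": "get_weather", "args": {"city": "Oslo"}}
        call = json.loads(model_output)
        result = TOOLS[call["tool"]](**call["args"])
        return result  # fed back to the model as context for its next turn

    print(dispatch('{"tool": "get_weather", "args": {"city": "Oslo"}}'))

The loop itself is trivial; the open question is reliability, since a model that emits malformed JSON or picks the wrong tool breaks the chain.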

The issue all these companies will face is the supply of and demand for GPUs. What will help change this? Will the adoption of liquid neural networks help?

Liquid neural nets are still a niche thing, as far as I know. GPU and resource constraints may force them to think harder about architecture.
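
For readers who haven't met them: liquid time-constant (LTC) networks are small recurrent models whose neurons follow an ODE with an input-dependent time constant. A toy sketch of one cell's update, assuming the fused semi-implicit Euler step described in the LTC literature; all names and shapes are illustrative, not any particular library's API.

    import numpy as np

    def ltc_step(x, u, W_in, W_rec, b, tau, A, dt=0.05):
        # Input- and state-dependent gate.
        f = np.tanh(W_in @ u + W_rec @ x + b)
        # Fused semi-implicit Euler step of dx/dt = -(1/tau + f) * x + f * A.
        return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

    # Tiny example: 4 hidden units driven by 2 inputs, random fixed weights.
    rng = np.random.default_rng(0)
    W_in, W_rec = rng.normal(size=(4, 2)), rng.normal(size=(4, 4))
    x = np.zeros(4)
    for _ in range(10):
        x = ltc_step(x, rng.normal(size=2), W_in, W_rec, np.zeros(4), tau=1.0, A=np.ones(4))

Published LTC models are tiny compared with LLMs, which is both the appeal (cheap to run) and the reason they remain niche.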

Sep 25 · Liked by Gary Marcus

This is hubris.

We live in a world where this attitude has become the norm.

More often than not, this attitude stems from a position of privilege.

A startup that has absorbed most of the attention and, let's face it, most of the resources is running a live saga on social media.

Every few weeks, something happens at OpenAI.

And that's not a problem, but the "actors" are tweeting their plans and decisions. They leave, come back, leave, and start a new startup...

All this is delivered as it is happening, and we should somehow care about it.

If you'd like to leave, tell the people who should know.

This is not a PSA.

You don't need an audience for this to validate your decision.

Is it just me wondering about this?

I'm sorry, I couldn't hold this in.

Most certainly not just you, Alin. It is rather wearying to be constantly reminded of that certain class of people who have anointed themselves with digital divinity, and to be told we should be ever grateful for the privilege of witnessing their uber-privileged lives as they unfold minute by minute, even as they trample upon our right to live ours the way we desire. And the media, complicit.

Digital divinity: Sam Altman with a divining rod, divining for dollars.

“Dig over here, I’m getting a strong signal from the schtick” — Sam Altman to Satya Nadella

Sep 25 · Liked by Gary Marcus

Say what you like about academia, when it comes to getting a grant, people pore very carefully over your work to see if it's feasible and sensible before they part with any money. It seems like the standards in the commercial world are much lower; money is thrown around based on bullshit. I don't object to capitalism, far from it, but something is wrong here. It's not just AI, it's the same in my own field of medical imaging.

In the financial investing world (which I am beginning to research) it's called "sentiment", and it is largely what seems to drive investing. In other words, investing is largely not rational.

Indeed. The bandwagon effect probably accounts for a large minority of investment decisions, mostly driven by FOMO and misaligned incentives, i.e. low to no consequences for the person making the investment decisions.

Sep 25 · edited Sep 25

Yeah, I get that; I'm just not certain I understand why lots of people would be irrational (or even just suboptimal) with very large sums of money. Didn't Jim Simons, for example, show that a rational approach works consistently well? It's not my field, that's for sure.

Why? Because everyone knows this, and thinks they can "greater fool" their way to profit.

It's too soon for me to say what percentage is sentiment versus rationality, and whether it's more rational with large investors; and yes, a rational approach that also follows intuition (gut feeling) works vastly better in the long run. As for the "why", it's FOMO, regret, and other fear- and desire-driven mechanisms.

FOMO works as an explanation, I think. But it was never very rational to invest so heavily in LLMs when their deficiencies were known from the start. Investors can surely pay someone to tell them such things. Maybe it's that they want to own the company that develops a monopoly and are willing to take large risks on that basis.

Even in academia, grantmakers often rely primarily on proxy cues, such as credentials or prestigious affiliations, rather than looking at the work itself, which becomes secondary.

Yes, the applicant's qualifications, previous track record, and the environment where the proposed research will be carried out are all extremely significant parts of a grant and take up a portion of the application. That's part of what makes it not bullshit. You are more likely to get a grant if you are a senior professor in Cambridge than if you are not, even if the science is otherwise identical. Both the science and the person are explicitly assessed. You need to read the instructions before you write the grant to avoid disappointment. A good strategy is to write it as a co-investigator with someone else who is already experienced.

VCs are not a hive mind. They always look for new opportunities others missed. If you make a solid case, you have a shot.

"people pore very carefully over your work to see if it's feasible and sensible before they part with any money."

Not in medicine they don't. The vast majority of published research is useless makework, because people need publications on their CVs. Including, formerly, the people approving the grants.

Yeah, well, medicine is corrupt, especially in the US. I don't at all trust doctors anymore, especially knowing some of them. Pre-clinical medical science is a bit less corrupt, at least at the high end. I'm writing a grant right now, in fact it's a grant just to gather pilot data to have enough evidence to get a larger grant later on. If you are writing a grant, for example, to the UK medical research council you can expect a thorough review, whether you're a clinician or a scientist. I'm pretty sure the standards at NIH are high too, although I have only worked there rather than applied for a grant. It's not perfect but it's not pissing away billions on AI bullshit either.

Well, I can tell you from experience: If you're trying to get funding for a smaller project, they make sure you've done your homework, and they're very likely to get their money's worth. The bigger the investment, oddly enough, the less due diligence.

It can be impossible to prove, with a big project, that it will pan out. Which leaves you, as Eric says, with sentiment. The critical ingredient of sentiment is previous success. Celebrity and academic credentials are good, too.

It's hard to know what "pan out" actually means in this context. My experience is that funders tend to have quite close ongoing relationships with fundees. "Pan out", I guess, means publishing a lot of good papers, and the time to check on that is at the next application. If you are perceived to have screwed up a $5 million grant you won't get another; that does happen. These days patents and monetisation of results are getting more attention, which I feel good about personally. We're interested in that too, and so is my university, more so since 2015. Many of our grants, including the largest, are fellowships, which are about the person as much as the specifics of the research plan. It's about both, though. The level of scrutiny on all this seems way, way higher at every stage than in VC funding, especially per dollar spent. The poor quality of companies has actually inspired the interest in commercialisation.

My most recent job was in academia.

The board, and the CIO in particular (a completely tech-illiterate management-consultant type), were all in on Waymo and AI nonsense. There was an actual mandate from upper management to 'Find cost savings for the university using AI technology.'

Well, there is certainly a large difference between the university administrative staff (is that what you mean by "the board"?) and the actual scientific process of peer reviewing grants and papers, but that is another ongoing story. Academia has many problems, but they are not exactly the same problems as this.

Sep 26 · edited Sep 26

Yes I was referring to the administrative staff.

In all my many years of working in IT, I've never encountered such a messed-up organizational structure or such bad politics. To be fair, the staff doing the actual work of educating and taking care of the students were lovely. If the university had targeted its layoffs at the top end of the org chart instead of the bottom, it might survive the coming enrollment apocalypse. It's a private liberal arts university in a very expensive city that was the third or fourth choice for most of its enrolled students. If they stay the course, they're doomed.

I have a genius for picking employers that get themselves into dire financial straits. :(

Sep 25 · Liked by Gary Marcus

Investors are herd animals. Whatever the big new thing is gets all the attention, and a lot of smaller, interesting projects get forgotten about.

That's in ordinary times. When there's a BIG BIG BIG new thing, as now, it can be pretty catastrophic.

It's really too bad, because the whole point of venture capital is to identify and fund innovative projects and help them get off the ground. This is going to mean a loss of funding for small innovative projects for what? Ten years? :(

Sep 25 · Liked by Gary Marcus

These magazine covers are starting to look like high-glamour mugshots...

Starting?

You're right, they really should end.

Sep 25 · Liked by Gary Marcus

The harm is palpable.

Dear Mr. Altman,

Your recent article (https://ia.samaltman.com/) promoting the benefits of AI paints a rosy picture of a future where everyone can harness the power of AI to amplify their abilities and create like never before. However, this utopian vision is marred by the very tool that is supposed to enable it: the prompt box.

By relying solely on the user's ability to craft effective prompts, the prompt box erects a formidable barrier between those with extensive vocabularies and those without. It creates an echo chamber where those with limited language skills are trapped within the confines of their existing knowledge, while those with richer vocabularies can explore a vast sea of information and ideas.

This inherent flaw in the system has far-reaching implications, effectively creating a knowledge apartheid that perpetuates and exacerbates existing inequalities. As you continue to promote the benefits of AI and raise capital for your company, it is crucial to acknowledge that the current state of AI technology not only concentrates wealth in the hands of a few but also leaves behind a vast majority of people with limited vocabularies, denying them the opportunity to fully participate in the knowledge economy.

If AI is to become a tool for everyone, as you claim, it must transcend the limitations of the prompt box and provide equitable access to knowledge for people of all linguistic backgrounds and abilities. Otherwise, the promise of AI will remain a distant dream for many, and your claims of AI's democratizing power will ring hollow.

We urge you to recognize the external harms caused by this flawed system and address them with the urgency they deserve. Ignoring these issues will only serve to undermine the credibility of your vision for a future powered by AI, and raises questions about whether the industry is truly committed to creating a more equitable and inclusive world.

Read a detailed description of how to mitigate the harm using semantic AI at aicyc.org. The technology follows Gary's description in Chapter 17 of his new book.

There is indeed "a formidable barrier between those with extensive vocabularies and those without", but that has nothing in particular to do with AI.

And "those with limited language skills are trapped within the confines of their existing knowledge, while those with richer vocabularies can explore a vast sea of information and ideas" is also true, and also has nothing in particular to do with AI.

It has everything to do with algorithms that are based on words, not concepts. Both search and LLMs require words to steer the algorithms. But that isn't the only AI. Semantic AI is built on concepts and can guide users to information without words. You are locked in your knowledge bubble; aicyc.org shows examples.

That seems even more dystopian. At least I can argue with, and correct, a language-based model when it's clearly returning propaganda.

That model will just disappear things down the memory hole entirely.

Not unless you are in your knowledge bubble. Millions of concepts are beyond your knowledge. A semantic AI model at the same knowledge scale as an LLM can guide users with immediate fact-checking and context. There are billions of internet users who are not privileged as you and I are, and you are in knowledge poverty regarding many areas. Semantic AI is one remedy: aicyc.org.

Thanks for another insight into more 'whack-a-mole' with stochastic LLMs. I hadn't thought about the exclusivity created by prompts with respect to language.

There are ongoing issues when a language model is not fit for purpose, requiring constant 'fixing'. When the world has access to a symbolic, language-independent representation like ours, one that extracts 'meaning', these problems don't occur in the first place.

Semantic models are promptless. They encourage browsing. Search and LLMs make browsing nearly impossible.
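
I can't speak to how aicyc.org actually implements this, but one way to picture "promptless browsing" is navigation over a concept graph: the user starts at any node and follows typed links outward, with no query box at all. A toy Python sketch; the graph contents and relation names are invented for illustration.

    # Each concept points to related concepts via named relations.
    GRAPH = {
        "vaccine": {"is_a": ["biologic"], "related": ["immunity", "antigen"]},
        "immunity": {"related": ["antibody", "vaccine"]},
        "antigen": {"related": ["antibody"]},
    }

    def browse(concept: str) -> None:
        # No query string needed: just expose the outgoing links to follow.
        for relation, targets in GRAPH.get(concept, {}).items():
            for target in targets:
                print(f"{concept} --{relation}--> {target}")

    browse("vaccine")

Whether such a graph can reach "the same knowledge scale as an LLM" is exactly the open question.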

Sep 25 · Liked by Gary Marcus

Getting, and staying, high on your own hype. Neat trick. Amazing how so many others have convinced themselves they, too, are getting a contact high from being anywhere near it. Now, if only all of us could do the same...

$3 billion of revenue per year means they have a real product that people find useful enough to pay for, even if free versions exist. Of course they burn a lot more, but that is normal early on.

Btw, early losses are no guarantee of later profits, even if huge later profits often started with early losses.

Early losses and early revenue are not a guarantee of future success, of course. But they do show the product is real. Unlike the cons of Elizabeth Holmes and the crypto dudes.

While true in general, is this still 'early on'? GPT-3, which later burst on the scene as ChatGPT, was released in 2020 (three years after the inspired brainwave at Google that is 'transformers').

Amazon didn't make a profit for 15 years.

But they didn’t have any competition for years.

True, but not everything that makes losses becomes an Amazon (which optimised/disrupted a very old fashioned business, retail, btw).

At this point, we do not know how GenAI is actually going to pan out. For my money, nothing AGI-like (I'm not afraid of programmers being replaced; IT is far too brittle for the fundamental unreliability of GenAI), but we do get a new category of 'cheap' (in both meanings) in some areas. We may increase the productivity of professionals (including coders), but how big that effect will become we do not know; so far it is very limited. We may get a lot of 'cheap volume' in certain areas, and there is money in that, of course, but enough? I have no idea.

We simply do not know if GenAI will be 'good enough' for the revenue to cover the cost. It might well be another Concorde-like situation in the end.

OpenAI is still in the phase where the tech is ramping up, and so are losses. It would be a concern if the tech wasn't being used or if it was not improving.

It might not need to improve much to disrupt. But we might not like the results as humans when we drown in GenAI created 'cheap' content. Where is Sora by the way?

I think AI content is just the first phase. The real value is in bots doing office work. That's what companies will pay for. Sora is a curiosity, I think. Neural nets in combination with other methods will be able to one day produce realistic video without hallucinations, but I don't think that will be as disruptive or in demand as task-executing bots.

“Yet people are valuing this company at $150 billion dollars.”

The key here is “people.” Ask yourself “which people” and you can figure it out. The same people in the same echo chamber.

Every decade has produced the exact same crop of hucksters and charlatans and conmen since pretty much the beginning of time. From bridges to leveraged buyouts to bitcoin to blood tests to bots, it's the same story. Some days they walk on Wall Street, other days they walk on Main Street; for a couple of decades they've been walking through Silicon Valley. They have simply lit capital on fire for decades. And centuries. It's all just a big game to them. And even when some go so far as to warrant going to jail, the trail of destruction they leave behind is quickly forgotten, because humans don't have long memories. The cycle will repeat itself because it always has.

Thankfully, there are still some good guys out there working diligently in obscure labs and small firms, doing real work that will someday do more to change the world than a search engine or an advertising algorithm. Keep your chin up. The really good uses of AI wouldn’t thrive in these kinds of deepfake companies anyway. They exist out of the spotlight and will make their mark well enough. One more life saved in a hospital CT machine or one more child educated by a homegrown anti-dyslexia algorithm and the good work gets done.

I wanna agree with you.

But I also had a fantastic jag in 1996 about how the internet was snake oil.

While I agree with many of the views and sentiments expressed here, I'm not sure the implied comparison with [fill in psychological labels here] of the likes of Elizabeth Holmes and SBF entirely holds water, as far as the level of train wreck: the degree of deception, criminality, and other ignorant actions. I'd say it's a different kind of train problem. We shall see...

I'm a tad more radical than Prof. Marcus on this, but there's a problem here: the underlying technology (LLM) on which all this hype is based is exactly and only a random text generator.

This technology has no mechanisms, means, theory, or anything else for relating the text it reads or generates to actual physical reality in the real world. Google's latest thing, on its first day out, announced that Demis Hassabis* was the CEO of Google. Hilarious, yes. But deeply sad. And demonstrative of the inherent stupidity behind this whole show: these things really have no idea what the words they generate mean.
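
For what it's worth, "random text generator" is close to literally true at the decoding step. A minimal Python sketch of temperature sampling (names illustrative; production systems add tricks like top-p, but the principle is the same): the model scores candidate next tokens and one is drawn at random, with nothing in the loop that checks the draw against reality.

    import numpy as np

    def sample_next_token(logits: np.ndarray, temperature: float = 0.8) -> int:
        # Softmax over the model's next-token scores.
        z = logits / temperature
        z = z - z.max()  # numerical stability
        probs = np.exp(z) / np.exp(z).sum()
        # The "random" part: a weighted draw from the distribution.
        return int(np.random.choice(len(probs), p=probs))

    # Toy vocabulary: a plausible-but-wrong continuation can easily be drawn.
    vocab = ["Sundar Pichai", "Demis Hassabis", "a potato"]
    print(vocab[sample_next_token(np.array([2.0, 1.6, -3.0]))])

Fluency comes from well-tuned probabilities; truth would have to come from somewhere else.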

So the people telling you that AI does _meaningful_ things, when it actually doesn't, really are in the same class as E. Holmes and SBF.

*: DH is one kewl bloke. When he earned the title Candidate Master in chess, he was the youngest person in history to have done so. So he's as smart as it gets. So maybe he's planning on taking over Google, and Google's AI figured it out...

LLMs already do meaningful things for me in my daily work.

Practical, for sure, but are you more productive? I fell for the hype of ChatGPT and signed up for the overpriced subscription. But after using it for a while and comparing it to X.com's Grok (I already subscribe to X), I canceled my ChatGPT subscription, because ChatGPT didn't provide much beyond what Grok 1.0 provided, and now we have Grok 2 and more to come. There's a lot of competition, and ChatGPT is like the Netscape browser (the first to come out), but I don't think it will be the winner.

I think this is why OpenAI has pushed its way into Apple. It helps make it sticky. But eventually educated consumers are going to realize that all the sensors in iPhones and Macs have become a public surveillance arm for governments, passing on all your private information (listening to everything around your phone, tracking you, plus health information, images, text messages, emails) and aggregating that information to open GMO's and governments. You can be targeted in many ways if you don't align with their thoughts or actions. I think this is part of what Elon and others have realized about Sam's mission.

To answer your question: yes, very much more productive in a particular use case at work.

It is true that OpenAI doesn't have any special secret sauce. But I'm not paying the bill, and our prompting is optimized for it, to return outputs that suit us. So for now it's them.

This is my take as well. Sam Altman is a straight-up huckster. The messaging coming out of OpenAI regarding o1 is irresponsible BS (as was their messaging regarding GPT-4 when it came out). Waving around how well it scores on reasoning tests designed for humans should be beneath them; they do it because they know that there are millions and millions of gullible AI optimists out there who really want Star Trek to come true and will gobble up all the deceptive marketing the tech industry tosses at them.

I'm being uncharitable, I know, but it's been almost two years of this nonsense and my sense of charity is all used up. The difference between Sam Altman and Elizabeth Holmes is that Sam Altman at least has a product with *some* useful applications, while hers had none. The similarity is that neither's product is capable of what they claim.

Been saying this for a while: The knowledge to be gained here is about human intelligence, not the artificial kind.

Having said that, while this wave of AI is going to hit its own wall on the road to AGI (like 'combinatorial explosion' in the past), with o1 being more an illustration of that than a counterargument, GenAI is still probably going to disrupt (by being 'cheap').

But the investors seeing an opportunity for 'getting in at the start of the next near-monopoly' (like Google or Amazon), or who suffer from FOMO, are going to get hurt. $150 billion is indeed insanity. But we're all likewise vulnerable. We're all potential flat-earthers (see my first remark).

I'm not 100% convinced there aren't some moats (mostly barriers to entry because of training, real/synthetic data, etc., and proprietary engineering the hell around limitations).

I pray there are moats preventing AGI. Nearly the first thing, if not the very first thing, AGI would be used for is killing humans (military applications, AKA state-sanctioned murder). Just imagine what the situation in Gaza would be like if Israel could deploy a fleet of autonomous killbots that would never question orders, no matter how horrific, and could be deployed in mass quantities as fast as they can be built.

We should be more afraid of humans employing dumb but powerful tech unethically than of AGI (which is not on the horizon, never mind close).

Humanity sometimes tries to regulate this (for instance, the rules that govern war crimes; if you act genocidally, you may end up in prison, if you lose...). We're not very good at it, though.

We don't need AGI to get "autonomous killbots"; programming a machine to kill can be done now. And something that never questions orders, no matter how horrific, would by definition not be AGI, at least by the "human-level thinking" standard most people use.

(I also disagree that this is the sort of thing Israel would do; pure indiscriminate murder is more Hamas's bag, but this ain't a politics blog.)

Your writings are almost always very insightful and I always learn from them. However, some of them seem misguided by emotion, by a desire to elicit one in the reader, or, much worse, by a desire to be proven right or to vindicate something. This post is a good example: it doesn't give any new information, insight, or opinion, and seems better suited to X. It does, however, make an association that I find troubling. You are right about many things about OpenAI. Better yet, for the sake of argument, let's say you are right about all of it. Is it helpful to your argument to associate OpenAI with two proven criminal enterprises? It seems more like an ad hominem attack than anything else. Is it necessary to prove what you have already said about that company? Is it helpful? You may say the comparison is just about the "fake it till you make it" approach, but if that was the intention, you definitely could have written a more extended and balanced post.

For what it's worth, I interpreted Gary's comparison to be "look at the tech BS artists Forbes puts on their covers", not "look at the tech criminals Forbes puts on their covers". But I can see the ambiguity.

If ever there were a sign that the wealthy aren't taxed enough this is it. Multi-multi millionaires, billionaires, and large corporations have so much money coming in they can't think of anything better to do with it than throw it onto bonfires like this.

God forbid they use their COVID windfalls to actually contribute to the communities they live in through charity and support the employees that made them wealthy in the first place.

The similarity between Theranos and OpenAI? Both technologies relied on overcoming scientific hurdles first: the laws of nature.

By putting engineering and tech *before* solving the open biological and functional neuroscience problems, they had to rely solely on workarounds and scaling.

When you butt heads with the laws of nature, there is only one outcome.

The history of conversational agents, from ELIZA through Loebner Prize (Turing test) contestants and other ~AGI efforts, intelligent assistants, task-focused chatbots, and today's systems, is that many people like many of them, but won't pay enough to offset the cost. IMO generative AI does not present barriers to people with limited vocabularies: its ability to infer the intent of a question, however asked, is stunning. It presents a barrier to people with limited skill in applying critical thinking to selecting appropriate topics to address and assessing responses. Those skills can be learned. IMO sufficient revenue will not materialize, exacerbated by mounting demands on energy and water resources, which will also afflict hybrid models. We'll see.

OpenAI is not a train-wreck. They just closed a new round of funding and showed a very notable improvement in reasoning. They are still the leader in the field, but I fully expect Google to have a notable release before year-end.

OpenAI is becoming a normal company, and Altman a normal businessman. Must have revenue and control. That's how things work.

Author

I don't think the round is closed, and some investors are going to be pissed that Sam didn't give them a heads-up about Mira.

We will see. I think Ilya and Brockman leaving were bigger deals. Mira is more of a manager, not a mastermind.

The moat is the technical excellence and real-world usage. Meta is an also-ran, a me-too wannabe with negligible impact.

The real challenger isn't Meta but AWS, using Llama 3 and other models to drive cloud hardware spend. For many applications the OSS models are already good enough.

Apple choosing to outsource Apple Intelligence is the death of the company. Jobs is probably throwing iPhones in the afterlife.
