40 Comments
Jun 21 · Liked by Gary Marcus

Whatever happened to Q*, which was hypothesized as the breakthrough AGI needed? Did anyone ever figure out what Q* actually was, and whether it would figure prominently in GPT-5 and beyond?

Anyhow, the more I use GenAI, the more often I think to myself that "this stuff really isn't all that good." I'm currently paying the $20 a month for GPT-4 and Gemini, but I'm not sure for how much longer.

It's also a bit disconcerting how some GenAI pushers respond to my dilemma: they insist that I know nothing about prompting and that I need to find the right use cases. Well, I know enough about prompting to realize that I can often save time and effort by just doing the task myself instead of going through trial and error to find the right prompt. As for use cases, if I have to work on finding them, as opposed to handling the ones I have, GenAI would distinctly qualify as a solution looking for a problem.


The whole idea of prompt engineering is totally crazy.


There is another kind of UI, called "promptless."

An example of a promptless interface showing AI generating multi-mode interactive encyclopedia articles (Knowledge Clips) can be seen here (1m19s): https://bit.ly/3WuGyxE


A solution in search of a problem, as my mother would say.


A "no true Scotsman" argument: you "just need to find the right prompt; otherwise it's your fault."


Haha, I'd already forgotten about Q*. How time flies!

Jun 21 · edited Jun 21 · Liked by Gary Marcus

"I think you wanna know what our chances are. The lives of everyone depends on someone back there who can not only fly this plane but who didn't have fish for dinner." You know it Gary. Everyone at OpenAI had fish for dinner... good thing you had the lasagna.

Jun 21 · Liked by Gary Marcus

GPT-1 release date: 11 June 2018

GPT-2 release date: 14 February 2019 (time between releases: 8 months 3 days)

GPT-3 release date: 28 May 2020 (time between releases: 15 months 14 days, about ×2)

GPT-4 release date: 14 March 2023 (time between releases: 33 months 17 days, about ×2)

GPT-5's release date, according to this extremely precise algorithm of mine with n = 4 (each subsequent gap takes about twice as long as the previous one), is 14 July 2028.

Praise the gods of straight lines on graphs.
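
A minimal sketch of that extrapolation in Python, assuming the idealized doubling of the gaps (8, 16, 32, then 64 months; the real gaps were roughly 8, 15.5, and 33.5):

    from datetime import date

    def add_months(d: date, months: int) -> date:
        # Shift a date forward by a whole number of months, keeping the day.
        years, month_index = divmod(d.month - 1 + months, 12)
        return d.replace(year=d.year + years, month=month_index + 1)

    gpt4_release = date(2023, 3, 14)
    predicted_gpt5 = add_months(gpt4_release, 8 * 2**3)  # 64-month gap
    print(predicted_gpt5)  # 2028-07-14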

Jun 21 · edited Jun 21 · Liked by Gary Marcus

Gary, AI does it again: go to Bing and search for "Cyrillic characters." It comes back with an AI-generated list of characters that are not all Cyrillic. This is very worrisome. The letters of an alphabet should be objective common knowledge.

Jun 21 · edited Jun 21 · Liked by Gary Marcus

Thank you for taking the time to go find these past predictions of today. There's that old saying about the "triumph of hope over experience"... this describes the great majority of people who go around predicting the future, particularly when it comes to tech. They've got the imagination of Da Vinci and the memory of a goldfish. Yet somehow, journalists and pundits and politicians keep on taking them seriously.

Don't forget, we're all supposed to be hanging out in the metaverse right now.


I think the more accurate saying when dealing with VCs and Silly Valley hype is: "A lie can travel around the world before the truth has got its boots on."


“The flight, realizing it needed a fundamentally different approach path to successfully land at the revolutionary new Airport 5, had to circle back around. Apparently it’s a very large circle… rumor has it they need to hire vast numbers of workers in Kenya and other developing countries to spend time absorbing the deep learning for back-propagation and other higher functions needed as nodes of a new kind of intelligent network called an "LLF" (Large Language Farm)…”


This is like waiting for Frankenstein 5 to replace Frankenstein 4 terrorizing the local town. 🧟‍♂️

Jun 21 · Liked by Gary Marcus

I am a little surprised they didn't label GPT-4o as GPT-5 and be done with it. This GPT-5 expectation is going to be a millstone around their necks for years.

Jun 21 · Liked by Gary Marcus

If the scaling laws have plateaued, they'll never release anything called GPT-5.


"Probably for the third or fourth time, if my hunch is correct, with GPT 4-Turbo and 4o perhaps being failed attempts that didn’t seem worthy of the name, with another earlier attempt halting before Sam’s time speaking to the Senate in May 2023." - great insight, now that you've pointed it out, it looks almost certainly the case


"PhD level." Good grief, as if they've taken care of the elementary-level gaffs already...


Indeed, Mira Murati was recently quoted as saying that they (OpenAI) didn't have anything in their labs that was a significant step forward.

So maybe not even in 18 months, especially if the evidence from 4T to 4o really does signify that LLMs have plateaued in capability.


This reminds me... remember in the GPT-4 release video how it was supposed to understand pictures well enough to tell you what would happen next? Here it is:

https://youtu.be/oc6RV5c1yd0?feature=shared&t=29

Am I crazy or did this feature never come out? I was really looking forward to tricking it into saying dumb stuff :)


This feature does work, and it works really well for simple scenarios like the image in the demo. It also interprets memes quite well.


Thanks! Is it new to GPT-4o or has it been around this whole time?


I think the image-understanding capability before 4o was already quite good, and 4o improved it further. This isn't that surprising, because image-text pairs are so abundant on the Internet that they're a good fit for unsupervised pretraining. See the response to the glove-and-ball image below.

It looks like the image shows a scene with a red boxing glove positioned above a seesaw with a blue ball on one end. The boxing glove is mounted on a scissor lift mechanism.

This setup appears to be from a puzzle game where you might need to use the boxing glove to interact with the ball on the seesaw. Likely, the goal is to hit the ball or manipulate it to achieve a specific objective, such as moving the ball to a target area.

If you need specific help with this game or puzzle, please provide more details about what you're trying to accomplish or the rules of the game.

what happens when the glove drops?

ChatGPT

When the glove drops in the depicted setup, it will likely strike the seesaw. Depending on the force of the impact and the positioning of the glove, several outcomes are possible:

The Blue Ball is Launched: The force of the glove hitting the seesaw could catapult the blue ball into the air. The trajectory and distance the ball travels would depend on the angle and force of the hit.

Seesaw Movement: The seesaw will tilt sharply in response to the impact. If the glove hits with significant force, it might send the ball flying off the seesaw.

Balance Change: If the hit is not strong enough to launch the ball, it might just cause a significant movement in the seesaw, potentially causing the ball to roll off the end.

In many puzzle games, such setups are used to test the player's ability to control and predict the physics of moving objects. The exact reaction would depend on the specific game mechanics and physics engine used.

ChatGPT

The image you uploaded is a meme posted by Elon Musk, depicting a satirical take on how Apple and OpenAI use data. It shows a man and a woman drinking from the same coconut with straws, labeled "Your Data." The woman represents Apple with "iPhone" written on the coconut, and the man represents OpenAI. The text at the top reads "How Apple Intelligence works."

The implication is that Apple, through its iPhones, collects user data and shares it with entities like OpenAI. This reflects concerns about data privacy and how companies handle personal information.

If you have any specific questions about this image or need further analysis, feel free to ask!


The opportunity cost in all of this is mind-bending. We must invest in GenAI or we will be left behind, they tell us. Meanwhile, the benefits aren't materializing, and the tech that was supposed to solve climate change is going to take 9% of our electricity in 5 years.


It’s impossible to please someone who is intent on criticising.

If OpenAI had fired out GPT-5 within a year of GPT-4, you’d be up in arms about their recklessness and lack of proper concern for building responsibly and giving humanity a chance to keep pace with the technology.

If they go slowly, you write a silly post like this one.

It’s very easy to be a criticiser (note: very different from a critic). You'll never go out of work with your attitude, Gary.

Would be more interesting to see you create something meaningful for the world instead.

author

If 5 were out by now and as good as people imagined, I would be out of a job. If Santa Claus were as good as they say…


1) That doesn’t make any sense at all.

2) People who are intent on criticising will always find something to criticise, and thus never be out of a job.

Jun 21 · Liked by Gary Marcus

I have been reading Gary's writings for the past year. I would not say Gary is intent on criticizing. There were several earlier posts in which Gary praised recent papers where researchers described efforts to discover the inner workings of LLMs. Honest science is about discovering the inner workings of a phenomenon (LLMs and next-token prediction in this case). Engineering the hell out of a black box (an LLM) without basic scientific understanding of it is irresponsible, wasteful, and potentially dangerous. That is what Gary is trying to point out.

author

I also praised Cicero and wish we saw more stuff like that.


Is this Gary saying his full-time job is OpenAI criticism?

Jun 22 · edited Jun 22

I’ve also been reading his work for some time now, about 18 months. I find it polarising in the emotive and very selective way the themes are presented and structured.

It’s that polarisation that I find to be non-helpful. In the extreme.

Just as it sounds like you are, YZ, I’m very motivated for the discussions around generative AI to be balanced and nuanced. It concerns me that someone with this platform doesn’t use it as such.

I have a consultancy based around presenting the full dialogue of generative AI technology to SMBs and their leaders. That includes a comprehensive canvassing of the potential of the tech, as well as a constant focus on the risks (of which, I say, there are many, of a really serious nature) and on how to use the tech responsibly and ethically.

I firmly believe that it’s only through balanced and nuanced discussion that we’ll collectively progress the technology to something that’s widely beneficial, as well as safe.

Nuance and balance are what the business community needs right now from AI commentators.

Not snarky, antagonistic, disingenuous rants that, seven times out of ten, focus on people (mostly Altman and LeCun) rather than on the tech and its larger themes. For someone who claims to be a scientist, this blog most often reads like a disgruntled AI gossip column.

That’s polarising. And that’s what I wish Gary would stop, becoming more aspirational for his platform instead. For the benefit of all.

If you’d like an example of what I consider highly intelligent, nuanced discussion, Engines Of Engagement by Stodd, Schatz & Stead is an excellent read.


This is a Substack post, not a research paper. For nuance and balance, research papers may be a better source. Everyone has their own writing style; I think we need to look past the style and focus on the substance.

This particular post from Gary is about GPT-5 not arriving as LLM believers have been hoping and clamoring for. I think it is a healthy counterweight to the 24/7 "GPT/LLM/AI" media fanfare and the outrageous, sometimes flatly dishonest claims from the "AI" makers about their future trajectories.

Business leaders should benefit from reading this post if they want to steer clear of potential waste on GPT/LLM and GenAI. I only see middle management trying hard to incorporate LLMs into their existing workflows to show their "worth" in the corporate world, only for their end users to slam the results as useless garbage after time and money have been spent on the implementations. You will do business leaders a favor by advising them to take a wait-and-see approach to this current "AI" cycle, as opposed to taking a blind dive in a rush of FOMO.

Jun 21 · Liked by Gary Marcus

I would not call Gary's writings criticisms at all. They are simply honest and objective observations and analyses of the state of affairs in the current "AI" industry. They may sound objectionable and dispiriting to people who are in love with and in awe of LLMs, but they are still honest and true statements to people who can see through the facade and smoke screen. Amid an irrational hype and frenzy, rational thoughts and writings are indeed contributions.


I agree with Mark: it's disingenuous to suggest that once a significant breakthrough toward AGI had been achieved, it would be a straight-line path up from there. What has happened within the last couple of years, when the AI community has been working on this problem for 60+ years, should be celebrated and encouraged, not dissed.

Gary Marcus, have you no shame?


Your characterization of GPT/LLMs as a "significant breakthrough toward AGI" is questionable at best. As I see it, GPT/LLMs have no role whatsoever in the eventual shape of AGI, if and when we get there. The foundation of current DL/LLMs, backpropagation, has not changed in many decades. What happened in the last couple of years has been 10-20 years in the making, in terms of gradual algorithm improvement and refinement by many researchers, but most importantly it is due to a significant increase in hardware compute power and vast, easily available data, a large portion of which is of questionable copyright legality. Gary's writings are simply objective observations of the current "AI" landscape; what shame are you talking about?


There are precious few new ideas out there. Symbolic logic is a few hundred years older than neural nets, for example.

The most plausible path forward looks, for now, to be learning by imitation, followed by invocation of specialized modeling algorithms.

Which is not unlike how people do things. We operate by rules of thumb derived from experience, but we know when to fine-tune our approach depending on the particulars of the situation.


That isn’t how people do things. We have general knowledge and common sense, much of that flowing naturally from being a corporeal being.

People don’t have to learn “don’t drive into solid objects” by trial and error. We work differently from LLMs, and failing to acknowledge that is a mistake.


I wrote "not unlike". Indeed, people don't have to bump into walls a lot before they learn how to avoid them.

What I am trying to say is that people don't operate based on strict algorithms. The knowledge we have comes from experience, and we derive heuristics for how things work. We also know when to validate and refine strategies, which chatbots aren't great at.

Putting a vast amount of data in the pot and letting the chatbot figure out patterns from it is not a bad first approach. Later, when actual embodied robots can roam around, they will likely collect additional data that will enable them to improve.

That won't be enough, I think, to get a fully reliable robot, but one thing at a time.

Jun 21 · edited Jun 21

You sound like Lady Catherine de Bourgh from Pride and Prejudice 😂


This is a silly game being played here, both by you and by folks who dearly await the next iteration.

Scientific progress does not work that way.

Continued improvements in chatbots will require attention to detail, specialized modeling, and agents that can run tools, inspect work, and iterate.

We'll see that in incrementally better products by OpenAI, Google, Anthropic, etc.
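
As a rough sketch of that "run tools, inspect work, iterate" loop, assuming toy tool names and a toy checker (illustrative only, not any vendor's actual API):

    from typing import Callable

    Tool = Callable[[str], str]

    def run_agent(draft: str, tools: list[Tool],
                  passes_inspection: Callable[[str], bool],
                  max_steps: int = 5) -> str:
        # Act with a tool, inspect the result, and iterate until it passes.
        for step in range(max_steps):
            if passes_inspection(draft):
                break
            draft = tools[step % len(tools)](draft)  # crude tool-selection policy
        return draft

    # Toy usage: "tools" that normalize whitespace, then fix capitalization.
    tools = [lambda s: " ".join(s.split()),
             lambda s: s[:1].upper() + s[1:]]
    passes = lambda s: s == " ".join(s.split()) and s[:1].isupper()
    print(run_agent("  incrementally   better  products", tools, passes))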
