33 Comments
PT Lambert:

Whatever happened to Q*, which was hypothesized as the breakthrough needed for AGI? Did anyone ever figure out what Q* actually was and whether it would figure prominently in GPT-5 and beyond?

Anyhow, the more I use GenAI, the more often I think to myself that "this stuff really isn't all that good." I'm currently paying the $20 a month for GPT-4 and Gemini, but I'm not sure for how much longer.

It's also a bit disconcerting how some GenAI pushers respond to my dilemma: they insist that I know nothing about prompting and that I need to find the right use cases. Well, I know enough about prompting to realize that I can often save time and effort by just doing the task myself instead of going through trial and error to find the right prompt. As for use cases, if I have to work on finding them, as opposed to handling the ones I have, GenAI would distinctly qualify as a solution looking for a problem.

Richard Self:

The whole idea of prompt engineering is totally crazy.

George Burch:

There is another UI called promptless.

An example of a promptless interface showing AI generating multi-mode interactive encyclopedia articles (Knowledge Clips) can be seen here (1m19s): https://bit.ly/3WuGyxE

Amy A:

A solution in search of a problem, as my mother would say.

JonnyMadFox:

No true Scotsman argument. You "just need to find the right prompt, otherwise it's your fault."

Ben P:

Haha, I'd already forgotten about Q*. How time flies!

Purnima Gauthron:

"I think you wanna know what our chances are. The lives of everyone depends on someone back there who can not only fly this plane but who didn't have fish for dinner." You know it Gary. Everyone at OpenAI had fish for dinner... good thing you had the lasagna.

pale ink:

gpt-1 release date - 11 june 2018

gpt-2 release date - 14 february 2019 (time between releases: 8 months 3 days)

gpt-3 release date - 28 may 2020 (time between releases: 15 months 14 days, about x2)

gpt-4 release date - 14 march 2023 (time between releases: 33 months 17 days, about x2)

gpt-5 release date, according to the extremely precise algorithm of mine with n = 4 (each subsequent one takes twice as long as the previous one), is 14 july 2028

praise the gods of straight lines on graphs
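The doubling extrapolation above is easy to check mechanically. A minimal sketch (an illustration, not the commenter's actual method; note that doubling the last gap day-for-day lands in mid-October 2028 rather than July, so the exact date depends on whether you double in months or days):

```python
from datetime import date, timedelta

# Release dates as listed above.
releases = [
    ("gpt-1", date(2018, 6, 11)),
    ("gpt-2", date(2019, 2, 14)),
    ("gpt-3", date(2020, 5, 28)),
    ("gpt-4", date(2023, 3, 14)),
]

# Gaps between consecutive releases, in days.
gaps = [(b - a).days for (_, a), (_, b) in zip(releases, releases[1:])]
# [248, 469, 1020] -- each gap is roughly double the one before.

# "Each subsequent one takes twice as long": double the last gap.
gpt5 = releases[-1][1] + timedelta(days=2 * gaps[-1])
print(gaps, gpt5)  # doubling in days gives 2028-10-13
```

With n = 3 gaps, the "about x2" ratios are 469/248 ≈ 1.9 and 1020/469 ≈ 2.2, which is about as straight as these lines get.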

Costa:

Gary, AI does it again: go to Bing and search for "Cyrillic characters." It comes back with an AI-generated list of characters that are not all Cyrillic. This is very worrisome. The letters of an alphabet should be objective common knowledge.

Ben P:

Thank you for taking the time to go find these past predictions of today. There's that old saying about the "triumph of hope over experience"... this describes the great majority of people who go around predicting the future, particularly when it comes to tech. They've got the imagination of Da Vinci and the memory of a goldfish. Yet somehow, journalists and pundits and politicians keep on taking them seriously.

Don't forget, we're all supposed to be hanging out in the metaverse right now.

Glen:

I think the more accurate saying when dealing with VC's and Silly Valley hype is: "A lie can travel around the world before the truth has got its boots on."

Eric Cort Platt:

“The flight, realizing it needed a fundamentally different approach path to successfully land at the revolutionary new Airport 5, had to circle back around. Apparently it’s a very large circle… rumor has it they need to hire vast numbers of workers in Kenya and other developing countries to spend time absorbing the deep learning for back-propagation and other higher functions needed as nodes of new kind of intelligent network called an "LLF" (Large Language Farm)…”

Matthew Ferrara:

This is like waiting for Frankenstein 5 to replace Frankenstein 4 terrorizing the local town. 🧟‍♂️

Digitaurus:

I am a little surprised they didn’t label GPT4o as GPT5 and be done with it. This GPT5 expectation is going to be a millstone around their necks for years.

Aaron Turner:

If the scaling laws have plateaued, they'll never release anything called GPT5.

Roumen Popov:

"Probably for the third or fourth time, if my hunch is correct, with GPT 4-Turbo and 4o perhaps being failed attempts that didn’t seem worthy of the name, with another earlier attempt halting before Sam’s time speaking to the Senate in May 2023." - great insight, now that you've pointed it out, it looks almost certainly the case

David Hsing:

"PhD level." Good grief, as if they've taken care of the elementary-level gaffs already...

Richard Self:

Indeed, Mira Murati was recently quoted as saying that they (OpenAI) didn't have anything in their labs that was a significant step forward.

So maybe not even in 18 months, especially if the evidence from 4T to 4o really does signify that LLMs have plateaued in capability.

Ben P:

This reminds me... remember in the GPT4 release video how it was supposed to understand pictures well enough to tell you what would happen next? Here it is:

https://youtu.be/oc6RV5c1yd0?feature=shared&t=29

Am I crazy or did this feature never come out? I was really looking forward to tricking it into saying dumb stuff :)

anon:

This feature does work, and it works really well for simple scenarios like the image in the demo. It also interprets memes quite well.

Ben P:

Thanks! Is it new to GPT-4o or has it been around this whole time?

anon:

I think the image understanding capability before 4o was already quite good, and 4o further improved it. This isn't that surprising, because image-text pairs are so abundant on the Internet that it's a good fit for unsupervised pretraining. See the response to the glove-and-ball image below.

It looks like the image shows a scene with a red boxing glove positioned above a seesaw with a blue ball on one end. The boxing glove is mounted on a scissor lift mechanism.

This setup appears to be from a puzzle game where you might need to use the boxing glove to interact with the ball on the seesaw. Likely, the goal is to hit the ball or manipulate it to achieve a specific objective, such as moving the ball to a target area.

If you need specific help with this game or puzzle, please provide more details about what you're trying to accomplish or the rules of the game.

what happens when the glove drops?

ChatGPT

When the glove drops in the depicted setup, it will likely strike the seesaw. Depending on the force of the impact and the positioning of the glove, several outcomes are possible:

The Blue Ball is Launched: The force of the glove hitting the seesaw could catapult the blue ball into the air. The trajectory and distance the ball travels would depend on the angle and force of the hit.

Seesaw Movement: The seesaw will tilt sharply in response to the impact. If the glove hits with significant force, it might send the ball flying off the seesaw.

Balance Change: If the hit is not strong enough to launch the ball, it might just cause a significant movement in the seesaw, potentially causing the ball to roll off the end.

In many puzzle games, such setups are used to test the player's ability to control and predict the physics of moving objects. The exact reaction would depend on the specific game mechanics and physics engine used.

ChatGPT

The image you uploaded is a meme posted by Elon Musk, depicting a satirical take on how Apple and OpenAI use data. It shows a man and a woman drinking from the same coconut with straws, labeled "Your Data." The woman represents Apple with "iPhone" written on the coconut, and the man represents OpenAI. The text at the top reads "How Apple Intelligence works."

The implication is that Apple, through its iPhones, collects user data and shares it with entities like OpenAI. This reflects concerns about data privacy and how companies handle personal information.

If you have any specific questions about this image or need further analysis, feel free to ask!

Amy A:

The opportunity cost in all of this is mind-bending. We must invest in GenAI or we will be left behind, they tell us. Meanwhile, the benefits aren't materializing, and the tech that was supposed to solve climate change is going to take 9% of our electricity in 5 years.

Mark Laurence:

It’s impossible to please someone who is intent on criticising.

If OpenAI had fired out GPT-5 within a year of GPT-4, you’d be up in arms about their recklessness and lack of proper concern for building responsibly and giving humanity a chance to keep pace with the technology.

If they go slowly, you write a silly post like this one.

It’s very easy to be a criticiser (note: very different from a critic). You’ll never go out of work with your attitude Gary.

Would be more interesting to see you create something meaningful for the world instead.

Gary Marcus:

If 5 was out by now and as good as people imagined, I would be out of a job. If Santa Claus was as good as they say…

Mark Laurence:

1) That doesn’t make any sense at all.

2) People who are intent on criticising will always find something to criticise, and thus never be out of a job.

[Comment deleted · Jun 21]
Gary Marcus:

I also praised Cicero and wish we saw more stuff like that

Clyde Wright:

Is this Gary saying his full time job is OpenAI criticism?

Mark Laurence:

I’ve also been reading his work for some time now. About 18 months. I find it to be polarising in the emotive and very selective way the themes are presented and structured.

It’s that polarisation that I find to be non-helpful. In the extreme.

Just as it sounds like you are YZ, I’m very motivated for the discussions around generative AI to be balanced and nuanced. It concerns me that someone with this platform doesn’t use it as such.

I have a consultancy which is based around presenting the full dialogue of generative AI technology to SMBs and their leaders. That includes a comprehensive canvassing of the potential of the tech as well as always maintaining a focus on the risks (of which I say there are many, of a really serious nature), and how to use the tech responsibly and ethically.

I firmly believe that it’s only through balanced and nuanced discussion that we’ll collectively progress the technology to something that’s widely beneficial, as well as safe.

Nuance and balance is what the business community needs right now from AI commentators.

Not snarky, antagonistic, disingenuous rants that 7/10 times focus on people (mostly Altman and LeCun) rather than the tech and its larger themes. For someone who claims to be a scientist, most often this blog reads like a disgruntled AI gossip column.

That’s polarising. And that’s what I wish Gary would stop, and become more aspirational for his platform. For the benefit of all.

If you’d like an example of what I consider highly intelligent, nuanced discussion, Engines Of Engagement by Stodd, Schatz & Stead is an excellent read.

Bob Katz:

I agree with Mark: it's disingenuous to say that once a significant breakthrough toward AGI had been achieved, it would be a straight-line path up from there. What has happened within the last couple of years, when the AI community has been working on this problem for 60+ years, should be celebrated and encouraged, not dissed.

Gary Marcus - have you no shame?

Costa:

You sound like Lady Catherine de Bourgh from Pride and Prejudice 😂

[Comment deleted · Jun 21]
[Comment removed · Jun 22]
Lisa:

That isn’t how people do things. We have general knowledge and common sense, much of that flowing naturally from being a corporeal being.

People don’t have to trial and error, “don’t drive into solid objects.” We work differently from LLMs, and failing to acknowledge that is a mistake.

A Thornton:

We are once again on the historic arc of AI systems: the General Problem Solver, expert systems, the Cyc Project, Watson/Deep Blue, etc. By the end of this decade AI numerologists will be claiming, "ChatGPT? Never heard of it."
