68 Comments
Nov 23, 2023 · edited Nov 23, 2023 · Liked by Gary Marcus

From an ML perspective, LLMs are an amazing achievement. Who would have expected such impressive (from the perspective of an external observer) performance from such a simple NN model trained with vast amounts of (relatively low-quality) data, and vast amounts of compute?

From an AGI perspective, however, if your objective were to build literally the *worst* AGI possible (relatively weak cognition: lots of data plus statistical induction plus analogical reasoning, but no deduction or abduction; an extremely shallow internal model of the physical universe, if any; appalling misalignment with aggregate human preferences; no understanding of how it actually works inside, let alone a mathematical foundation), then you couldn't have done a better job.

I await further details of Q* with trepidation...

Nov 23, 2023 · Liked by Gary Marcus

What's really wrong at OpenAI is that all the female board members are out, whereas all the men involved still have roles. Adam D’Angelo and Ilya Sutskever were 50% of one faction; Altman and Brockman were 100% of the other. Net result: although technically Altman, Brockman, and Sutskever are no longer on the board, they are still in place. And we got two new rich white guys, Larry Summers and Bret Taylor. I’ve lost track of the score, but I think we’re around -12 for Human Intelligence vs. 0 for AGI.


"Me being me, I called bullshit ..."

Haha. Me too. My take is that billions of dollars will be wasted on LLMs and OpenAI in the quest for AGI. $100B was wasted on AVs, and investors seem not to have learned anything from that painful lesson. You and I could save investors busloads of cash, but they won't listen. I don't know about you, but I'm very affordable. My advice is free.

Nov 23, 2023 · Liked by Gary Marcus

Or possibly it's a reference to “the most familiar Q is portrayed by John de Lancie. He is an extra-dimensional being of unknown origin who possesses immeasurable power over time, space, the laws of physics, and reality itself, being capable of altering it to his whim. Despite his vast knowledge and experience spanning untold eons, he is not above practical jokes for his own personal amusement, for a Machiavellian or manipulative purpose, or to prove a point. He is said to be almost completely omnipotent and he is continually evasive regarding his true motivations.”

https://en.wikipedia.org/wiki/Q_(Star_Trek)

Nov 23, 2023 · Liked by Gary Marcus

The cynic in me suspects that this whole debacle, along with the hint of a huge breakthrough, is largely marketing (given how hard rapid commercialization was pushed at OAI dev day). OAI is facing growing competition in the generative AI space, and what better way to crank up the hype engine and keep OAI front and center in the news? It reminds me of the crypto pile-on, but with the potential for significantly greater negative societal impact (near-term and long-term).

Nov 23, 2023 · Liked by Gary Marcus

Gary is mentioned in this article by David Brooks:

https://www.nytimes.com/2023/11/23/opinion/sam-altman-openai.html

"A.I. is a field that has brilliant people painting wildly diverging but also persuasive portraits of where this is going. The venture capital investor Marc Andreessen emphasizes that it is going to change the world vastly for the better. The cognitive scientist Gary Marcus depicts an equally persuasive scenario about how all this could go wrong."

Nov 23, 2023 · Liked by Gary Marcus

Yep. The Bing example is just too funny (and too telling). The core capability of this technology is not being truthful; it is being creative (with the truth, among other things).

Nov 24, 2023 · Liked by Gary Marcus

Great stuff as usual, Gary! What I would add about so-called breakthroughs can be summed up by suggesting a read of "Crossing the Chasm" by Geoffrey Moore. He nicely explains the route from early creation and early adopters to a real industry, and what that takes: tons of additional investment, trial and error, interaction with customers, cost management, distribution systems, building after-market systems, etc. There is so much willingness to believe the hype (driven by greed and FOMO).

I am just watching the long Bloomberg video on FTX/SBF. It would be so instructive for people to see it and be reminded of the kind of hero worship Sam Altman is now getting. Sam is human and flawed, just like SBF.

We are at the earliest stages of creating and understanding how to use AI. We also have quantum computing, robotics, and gene-editing breakthroughs on the way. How some of that may integrate to create new products and services is beyond anyone's understanding. That is the long-term game, and it will demand a lot of risk-taking. Keep up the good work, Gary! At some point a scenario-based learning process may provide some guidance as we learn our way forward.

Nov 23, 2023 · edited Nov 23, 2023 · Liked by Gary Marcus

Perhaps Q* could be a homage to Star Trek's "Q" character - an immensely powerful, god-like being capable of manipulating time, space, and reality itself.

EDIT: I see someone else already alluded to this in a previous comment.

Nov 27, 2023 · Liked by Gary Marcus

Sutskever was the primary instigator. The other board members held the positions they were appointed to hold, providing viewpoints on the dangers of AGI.

The people to blame here are not the people doing their jobs but the people who created the ridiculous, boneheaded structure and then allowed it to lose board members to the point of not functioning as a board (and, to be clear, it was never intended to function as a fiduciary board). The people who set that up and put it in play are the people who are still in power.


I keep coming back to my amazement at us humans. It is so clear that the 'breakthroughs' aren't that. That the 'understanding' isn't that (it can't be). That 'learning' is not an ability of these models, but a misplaced word for optimising parameters in a very large formula (actually, reacting to prompts, whether zero-shot, one-shot, or few-shot, might be called the one kind of 'learning' it is capable of, and that is really shallow).
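To make that concrete, here is a toy sketch (my own illustration, not anyone's actual training code): what gets called 'learning' is, mechanically, nothing but iterative parameter adjustment to reduce a loss, whether the formula has one parameter or billions.

```python
# Toy illustration of "learning" as parameter optimisation: a single weight w
# stands in for the billions of parameters in an LLM, but the mechanism is
# the same. All values below (data, learning rate) are made up for the sketch.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs

w = 0.0    # the "very large formula" here is just y = w * x
lr = 0.05  # learning rate (assumed)

for _ in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the entire "learning" step, repeated

print(w)  # converges to ~2.0: a statistical fit to the data, nothing more
```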

But while it is pretty clear that none of this is in any way a step on the road to OpenAI's professed goal of AGI, the world is awash with people who are convinced it is. Why? This hype tells us more about the limitations of human intelligence than about the development of AGI.

In the meantime, the real issue is how these technologies, in the hands of 'evil humans', are going to wreak havoc in societies made up of intellects (all of us) that have little power to break free of their convictions.


Human stupidity is far more dangerous than AGI.


I don't think Q* is a big breakthrough. But it does not need to be. This is incremental work.

I think current methods are not a dead-end, and they have a lot to give. What matters for now is assistants that are becoming smarter and more reliable. These methods may also hint at future directions to remove their limitations.


This is basically what I said yesterday in response to the Reuters report, but from a more philosophical and biting angle (and mentioning the Longtermism and Effective Altruism at work behind the scenes with some on the board and in the AI field in general).

https://twitter.com/itsgottabenew/status/1727517681849626904


https://gist.github.com/B-R-P/89db51ca89a5170a88b107bce15c76f9

A small program making use of the Q* equation/algorithm.
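For readers who want the textbook background: in reinforcement learning, "Q*" usually denotes the optimal action-value function, approximated by the Q-learning update. Whether OpenAI's Q* has anything to do with this is pure speculation. The sketch below just shows the classic tabular version; the chain environment and hyperparameters are my own toy assumptions:

```python
# Minimal tabular Q-learning sketch. Assumes "Q*" refers to the optimal
# action-value function from reinforcement learning; that reading of
# OpenAI's name is speculation. Environment and hyperparameters are toy
# choices for illustration only.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # assumed hyperparameters
N_STATES, ACTIONS = 5, (0, 1)           # 1-D chain; 0 = left, 1 = right

def step(state, action):
    """Move along the chain; reward 1 for reaching the right end."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

Q = defaultdict(float)  # Q[(state, action)] approximates Q*(s, a)

for _ in range(500):  # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Bellman optimality update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        target = r if done else r + GAMMA * max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# Greedy policy per state; the agent should learn to always move right.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
```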


You debunked something that wasn't announced, released, or demonstrated. Well done... But yeah, it's true: there was a lot of hype and not a lot of actual info.
