35 Comments
Mar 3 · Liked by Gary Marcus

On this I wholeheartedly agree with Elon, and you Gary. Incidentally, the key to all of this work many of us are doing is in your post scriptum:

"Gary Marcus doesn’t play favorites; he calls them like he sees them. When a LeCun or Musk gets things wrong he says so; if they get them right, he says that, too. Our choices in AI should be about ideas and values, rather than personalities."

Amen to that! If only the mainstream media could finally understand this...

I just really hope you will be called as the expert witness and we all get to watch it live.

Mar 4 · Liked by Gary Marcus

If OpenAI were forced to open source GPT-4 and future advances as a consequence of this suit would that be good for the world? Maybe, maybe not.

Mar 3 · Liked by Gary Marcus

I'm sick of Musk and X etc., but I must agree with you here. OpenAI will never escape Microsoft alive. Here's to more open source in the future that engineers like myself can actually work with privately to suit our needs. I don't want to relive the Java Wars.

Not just about honor. Wes Roth noted that Musk's suit argues that allowing OpenAI non-profit status is analogous to allowing one team in a basketball game to score double for each point made.

Isn't it really about Musk using OpenAI's shifting goals to keep them from dominating AI, an area Musk would very much like to colonize? He woke up one morning full of OpenAI envy/hate and realized he had a legal way to throw sand in their eyes. When it comes to Musk, it is pretty much all about personal ambition. Even his recent alignment with MAGA is because he hates DEI and COVID shutdowns, which, in his eyes, hurt his companies.

"...something that is difficult to define (AGI)..."

There is no Artificial General Intelligence. There is no Artificial Intelligence. All that has been built, or ever will be, is Augmented Inference. Trained Guesses.

Gary's investigations contribute greatly, but does he have to keep pretending to pay fealty to the intelligence fairy to have influence? I strongly think not. I wish he would stop diluting the impact of his posts with bows to Sam Altman's attempts to found and own a religion.

To ensure that AI is indeed used for the benefit of humanity, I think that at some point world governments will have to consider creating a global AI research project and regulatory agency, something like the ITER fusion reactor demonstrator project and the IAEA (International Atomic Energy Agency). This would accelerate the development of real AI (not the current fakery) and make sure that automation benefits all members of society, not just a bunch of tech bros.

The change from 'digital intelligence' to 'AGI' in the mission is also noticeable. It opens up unrestrained moneymaking. After all, OpenAI's current mission is to ensure "AGI benefits humanity". Given that what they in reality *do* produce isn't AGI at all, they are free from the 'benefiting humanity' requirement for what they *do* produce. Brrr.

This is a five-minute video on artificial intelligence and creativity that everyone should watch. I just finished your book, and, like you, I use plenty of examples of AI understanding instead of learning. I call it AI wisdom; it has to be developed!

https://aeon.co/videos/why-strive-stephen-fry-reads-nick-caves-letter-on-the-threat-of-computed-creativity

A non-profit lab achieved very promising results and wanted to grow. So it struck a deal to make a separate for-profit entity. All fair and square.

Thank you for the "and deployment" part. ML is just about making use of DL as a technology, and even Hinton's original goal of trying to understand the brain using xNNs as a model seems to have been a failure. The "leaders" in the field are high on their own next-funding-round hype.

I'm wondering if Elon and Sam are coordinating this to bring more hype just before the GPT-5 drop.

But they _did_ open source some code! Let's not be rash! A _tiny_ sliver of *GPT-2* counts as open source (not even the full 40Gb GPT-2 model, but that's, uh, _semantics_ 🤯), does it not?

https://github.com/openai/gpt-2 (5 year old codebase!)

The pinned libraries are so old that I'm not sure the code can even be installed and run anymore on a modern Python.

https://github.com/openai/gpt-2/blob/master/requirements.txt

And, from their blog: "Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights." (https://openai.com/research/better-language-models) Given that much more powerful models have been open sourced and easily available from HuggingFace and other public sources, is this a modern take on the fable where the dog starves the horse by denying it the hay?

Might cancel my streaming TV subscription - _this_ drama is more compelling... 😂

Reasonably irrational to assume there's a profitless path to AGI

What type of empirical and/or replicable/theoretical framework/publication has Marcus produced of late, such that he would be able to help act as a verifier? 🤔🤔🤔
