20 Comments
Amy A:

Employees: we are not so sure that we are benefiting humanity these days….

Shon Pan:

Very little about AI benefits humanity, beyond the rhetoric.

David Sterry:

It's never been easier to read all my news as told by a pirate... Arrrrrrr!

Perry C. Douglas:

This is really about the employees and their feelings… but this is, unfortunately, not a morality play; it's a power play. Big money usually wins out over morality.

So unless there is a stronger regulatory environment, nothing meaningful in AI safety will get done.

However, LLM hype is dying down and OpenAI is desperate, i.e., doing deals with Reddit to get data that sounds more like “natural language.” What a joke… pure garbage. LLMs can’t do what the hype says they can… a sucker is something I try not to be.

Shon Pan:

This is why we need to lobby for regulations.

Shon Pan:

And as usual, if you want to help, please join us at #PauseAI. We can empower you to help push for regulation.

https://discord.gg/C2yRUTjv

It’s only your future, and the future of your children, at stake, after all.

Harry Bernstein:

Someone needs to look at the Scale AI guys too. They just raised $1 billion and do most of the RLHF for the big models. Very little transparency, and they're doing this: https://defensescoop.com/2024/02/20/scale-ai-pentagon-testing-evaluating-large-language-models/

B. Earl:

A Tale of Two Sams! One is in prison…the other…well, the other we shall see 😅

Andy:

Hi Gary. I enjoy your writing... even though it seems to have more than its fair share of typos. Do you need a proofreader to help out? I would be happy to give your posts a pre-peek.

Eric Cort Platt:

(I already offered, and jokingly suggested he could use ChatGPT with a prompt to do "low-level editing" only. :) He is on, I assume, a very fast research-and-publish schedule within a busy life, so typos perhaps come with the territory. But maybe with a prod now and then he'll be more vigilant. :) )

Gary Marcus:

Yep, too much going on, and I wanted that out immediately but had to take an important call… thanks to you both for your offers. Someday I will have enough leisure to accept.

Perry C. Douglas:

But the typos make it authentic and passionate… fine with me… substance first!

Andy:

Way better idea than mine. I would only slow down the process :-)

Eric Cort Platt:

I always remember what a philosophy professor said, way back in college: "I don't want to see how good a paper you can write. I want to see how good a paper you can write by next Tuesday." 😆

AKcidentalwriter:

Well done.

Aaron Turner:

I know Neel Nanda from his time at Cambridge. He's a very, very smart guy.

Richard Smit:

Thanks for sharing. See also this recent interview with Leopold Aschenbrenner. Mostly about the crazy scaling of the GPU clusters he claims are needed and the insane power required to run these clusters. https://youtu.be/zdbVtZIn9IM?si=uoqy5R30rv0tEkKg

[Comment deleted, Jun 5, 2024]
Bruce Cohen:

There are harms we can mitigate or maybe even eliminate. The existential risks are uncontrollable, and in my estimation very low probability, since we’re not on a path to develop AGI soon. But the harm to the low-paid Mechanical Turk workers doing the guardrail testing and pruning the horrific images and ideas that LLMs generate needs to be dealt with. Also, the harms done by applications of LLMs to surveillance, persuasion, and violence need to be mitigated, whether by regulation, direct action protests, or peer pressure (yeah, not likely to help, but worth a shot).
