21 Comments

Employees: we are not so sure that we are benefiting humanity these days….

Very little about AI benefits humanity, beyond the rhetoric.

It's never been easier to read all my news as told by a pirate... Arrrrrrr!

This is really about the employees and their feelings…but this is, unfortunately, not a morality play, it's a power play. Big money usually wins out over morality.

So unless there is a stronger regulatory environment nothing meaningful in AI safety will get done.

However, LLM hype is dying down and OpenAI is desperate; i.e., doing deals with Reddit to get data that sounds more like “natural language.” What a joke…pure garbage. LLMs can’t do what the hype says they can…a sucker is something I try not to be.

This is why we need to lobby for regulations.

And as usual, if you want to help, please join us at #PauseAI. We can empower you to help push for regulation.

https://discord.gg/C2yRUTjv

It’s only your future, and the future of your children, at stake, after all.

Someone needs to look at the Scale AI guys too. They just raised $1 billion and do most of the RLHF for the big models. Very little transparency, and they're doing this: https://defensescoop.com/2024/02/20/scale-ai-pentagon-testing-evaluating-large-language-models/

A Tale of Two Sams! One is in prison…the other…well, the other we shall see 😅

Hi Gary. I enjoy your writing... even though it seems to have more than its fair share of typos. Do you need a proofreader to help out? I would be happy to give your posts a pre-peek.

(I already offered – and jokingly suggested he could use ChatGPT with a prompt to do "low-level editing" only. :). He is on, I assume, a very fast research-and-publish schedule, within a busy life, so typos perhaps come with the territory. But maybe with a prod now and then he'll be more vigilant. :) )

yep too much going on and wanted that out immediately but had to take an important call.. thanks to you both for your offers. someday i will have enough leisure to accept.

But the typos make it authentic and passionate…fine with me…substance first!

Way better idea than mine. I would only slow down the process:-)

I always remember what a philosophy professor said, way back in college: "I don't want to see how good of a paper you can write. I want to see how good of a paper you can write by next Tuesday." 😆

well done

THOUGHT EXPERIMENT: Let's imagine that the whistle blower process works perfectly and all the big tech companies are thereby made ideal citizens of the republic.

QUESTION: How does that stop Putin, the Chinese Communist Party, the Iranian mullahs and other international bad actors from developing and deploying AI in any manner they wish?

CLAIM: The AI community seems lost in this fantasy that the West is the entire planet, and that we have powers of management we simply do not have.

REQUEST: Folks, please snap out of the AI alignment illusion. If after 75 years we can't get rid of the nuclear weapons which can destroy the entire modern world in literally minutes without any warning, AI is never going to be made safe.

You don't have that power. Nobody does. Whatever is going to happen is going to happen, and there's nothing you can do about it.

There are harms we can mitigate or maybe even eliminate. The existential risks are uncontrollable, and very low probability in my estimation since we’re not on a path to develop AGI soon. But the harm to the low-compensation Mechanical Turk workers doing the guardrail testing and pruning the horrific images and ideas that LLMs generate needs to be dealt with. Also, the harms done by applications of LLMs to surveillance, persuasion, and violence need to be mitigated, whether by regulation, direct-action protests, or peer pressure (yeah, not likely to help, but worth a shot).

I know Neel Nanda from his time at Cambridge. He's a very, very smart guy.

Thanks for sharing. See also this recent interview with Leopold Aschenbrenner. Mostly about the crazy scaling of the GPU clusters he claims are needed and the insane power required to run these clusters. https://youtu.be/zdbVtZIn9IM?si=uoqy5R30rv0tEkKg
