Employees: we are not so sure that we are benefiting humanity these days….
Very little about AI benefits humanity, beyond the rhetoric.
It's never been easier to read all my news as told by a pirate... Arrrrrrr!
This is really about the employees and their feelings… but this is, unfortunately, not a morality play; it's a power play. Big money usually wins out over morality.
So unless there is a stronger regulatory environment, nothing meaningful in AI safety will get done.
However, LLM hype is dying down and OpenAI is desperate; i.e., doing deals with Reddit to get data that sounds more like "natural language." What a joke… pure garbage. LLMs can't do what the hype says they can… a sucker is something I try not to be.
This is why we need to lobby for regulations.
And as usual, if you want to help, please join us at #PauseAI. We can empower you to help push for regulation.
https://discord.gg/C2yRUTjv
It's only your future, and the future of your children, at stake, after all.
Someone needs to look at the scale.ai guys too. They just raised $1 billion and do most of the RLHF for the big models. Very little transparency, and they're doing this: https://defensescoop.com/2024/02/20/scale-ai-pentagon-testing-evaluating-large-language-models/
A Tale of Two Sams! One is in prison…the other…well, the other we shall see 😅
Hi Gary. I enjoy your writing... even though it seems to have more than its fair share of typos. Do you need a proofreader to help out? I would be happy to give your posts a pre-peek.
(I already offered, and jokingly suggested he could use ChatGPT with a prompt to do "low-level editing" only. :) He is on, I assume, a very fast research-and-publish schedule within a busy life, so typos perhaps come with the territory. But maybe with a prod now and then he'll be more vigilant. :) )
Yep, too much going on, and I wanted that out immediately but had to take an important call… thanks to you both for your offers. Someday I will have enough leisure to accept.
But the typos make it authentic and passionate… fine with me… substance first!
Way better idea than mine. I would only slow down the process. :-)
I always remember what a philosophy professor said, way back in college: "I don't want to see how good of a paper you can write. I want to see how good of a paper you can write by next Tuesday." 😆
well done
Theranos vibe? https://sadnewsletter.substack.com/p/vapoware-20-and-silicon-valley
I know Neel Nanda from his time at Cambridge. He's a very, very smart guy.
https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf
Thanks for sharing. See also this recent interview with Leopold Aschenbrenner. It's mostly about the crazy scaling of the GPU clusters he claims are needed and the insane power required to run them. https://youtu.be/zdbVtZIn9IM?si=uoqy5R30rv0tEkKg
There are harms we can mitigate or maybe even eliminate. The existential risks are uncontrollable but, in my estimation, very low probability, since we're not on a path to develop AGI soon. The harm to the low-compensation Mechanical Turk workers doing the guardrail testing and pruning the horrific images and ideas that LLMs generate, however, needs to be dealt with. The harms done by applications of LLMs to surveillance, persuasion, and violence also need to be mitigated, whether by regulation, direct-action protests, or peer pressure (yeah, not likely to help, but worth a shot).