51 Comments

I haven't noticed much in the way of ethical behaviour from Google; rather the reverse if anything. Their AI project may currently be less bad than OpenAI, but I expect that it will regress to the Google norm in time.


The hammer needs to hit them all.

Maybe not Anthropic.


Open source development is DEFINITELY NOT the way to safety. It's bad enough having to corral corporations without also potentially handing extremely powerful technologies to rival governments, terrorists, and others, like accelerationists who explicitly seek the end of humanity.

Anthropic has at least produced good research, which can be used to prevent or lower the risk of AI takeover. In fact, in terms of capabilities versus safety, they've delivered safety more than anyone else.

https://time.com/6980210/anthropic-interpretability-ai-safety-research/


Shooting yourself in the face to spite your nose is no way to survive the future, especially with all of humanity on the line with us. We don't open-source nuclear or biological weapons either, for good reason.


This is a somewhat ridiculous perspective on AI. Closed-source AI development is not necessarily "bad". I might say: shit, I am a good father, a decent person with all my deficiencies and problems, a good friend, a good scientist, and I do not trust the people out there; look at this shithole of a planet, for the most part. So let's create an AGI company and make it as closed as possible, and hire the best people we can, because the world out there is shit. That does not automatically make you a "bad" company, lol. Why should I share MY AGI technology with you? It does not mean that I am making AGI to hurt you; it just means that I am not happy or OK with YOUR prejudices or biases affecting MY AGI, even if I am making AGI that will benefit you in the future. You are welcome to create your own AGI gig and compete or fight with me, but that does not mean I have to open MY TECHNOLOGY to you or anyone else.


If you have low expectations you're hard to disappoint, so I get it: the world is a rough place. At the same time, I think reverting to cynicism about the company as a whole is unfair to the AI team, who are genuinely doing good things.

Anything can of course still happen in the long run, but I believe good efforts deserve recognition and praise; otherwise you're contributing to a world where eventually nobody sees the point in doing any good.


I'm retired from working in digital technology, and have consequently seen far too much of how the sausage tends to be made.

In particular, I've seen a metric ton of hypocrisy: claiming one thing while doing another. This usually comes from the top down. Some of the youngest software engineers believe the idealistic hype and are inspired by it, but others understand they are supposed to "say what I say, but do what I do," with the two wildly inconsistent.

Obviously I haven't worked for every single tech company, and in particular have not personally worked for Google. (But I know people who have, and it sounds like a normal big tech company to me.) There are plenty of decent people there, behaving decently except when instructed otherwise, and sometimes even working around their instructions. That's true almost everywhere. But it tends not to be enough to prevent overall bad effects when incentives skew towards encouraging bad behaviour, particularly when the bad behaviour is at the expense of strangers (such as the general public, future generations, etc.).


I agree, but like you said, this is mostly the same everywhere.

I still think it is better to have a slogan that says "don't be evil" than one that says "let the world burn".

It will impact the people you attract, as you said, and therefore your company, whether the board likes it or not.

If you're point is that it is sad that people will be disappointed, face disillusion, feel misled and may end up cynical I agree.

But I don't think changing the slogan to "let the world burn" is a constructive solution even though - once you end up cynical - it may feel more appropriate. But that is also because a part of cynicism is that it means you have given up and might want to see things burn out of spite.

The biggest challenge in life is to stay optimistic and hopeful and strive for the best. You will end up disappointed many times but it can still be the best strategy with the best outcomes overall.

This is also why I think having the same people in power too long is risky. Cynicism easily leads to corruption, because why should you continue to care if, time and time again, others disappoint you?

This is not to discriminate against old people individually, just to say it's challenging to shed naivety without replacing it with cynicism. To keep trying and striving is exhausting and deserves respect.


I am glad that there are people like you still trying to improve human behaviour. Without those efforts, things would be even worse than they actually are.

Balance is required: history is littered with attempts to create perfection that turned into particularly nasty oppressive regimes. It's also littered with martyrs, many (most?) to causes that never succeeded.

But it's also full of real improvements, both short term (something was good for a while) and long term (e.g. the very long term trend of reduction of violent death - see Pinker if this is unfamiliar).

Each person gets to judge for themselves in each particular case whether to tilt against those particular windmills/try to right those particular wrongs.

Those choices are partly a matter of personality, partly of their personal situation, and partly of experience. Age is probably also a factor, as you suggest. (I'm not sure whether my current cynicism is due to specific experiences, generational experiences - the 1960s were more optimistic than the 2020s - or personality.)


Yeah I wasn't trying to say you as a person are very cynical - even if you displayed some cynicism.

Sometimes we have to vent or are in a bad mood. Everyone is cynical about things sometimes. I know I've posted some rants here and there in my time. It depends also on what occupies you at any given time.

Maybe that's the good thing, that negativity and cynicism can also weather with time.


Annoying that you can't edit out spelling errors. I obviously meant your* instead of you're point.


FWIW, Substack allows users to edit their own replies for some period of time after posting. Marcus has this enabled; you get to it via the horizontal row of three dots on the far right of the row with the like, reply, and share buttons.


Don't forget they also collaborated on enforcing the Great Firewall of China.


No I was referring to Google.


The story with Google and China is complicated and goes back many years. They've flip-flopped a few times now in their approach to working (or not) with the CCP. But there was a time in the '00s when they enforced the censorship, and as of 2018 they were working on a new censored search product strictly for the Chinese market. Background here:

https://theconversation.com/googles-censored-chinese-search-engine-a-catalogue-of-ethical-violations-101046

All of this is probably a distraction from the fact that they now do *domestic* censorship as well (in more subtle ways, deranking, etc).


They are hanging themselves with their own words!! Impressive investigation, Gary. I tip my hat. Much respect. I hope this decreases the haze these companies seem to cast over people. #beautiful

- This needs to be everywhere


Yes, the public will be convinced of this, but some will think for themselves: what I call a 'distinct minority'. It is that mindset I focus on. I like your assessment.


Worrying about OpenAI 'winding up first to AGI' kind of suggests they potentially are on a road there. Do you really want to entertain that suggestion?


I don't think Gary has ever said otherwise, or suggested AGI wasn't possible? I'd guess he thinks the mental models of high-ups at OpenAI are wrong though. In the past, Sam Altman has acted in a way consistent with the hypothesis that he thought scaling up GPTs larger and larger would lead directly to AGI, and Gary has always (correctly) said it won't. But there are tons of smart people at OpenAI, and as long as the money keeps rolling in I expect them to eventually make some important technical breakthroughs toward AGI, which they say is their only mission after all.


Mr Marcus, in many of your posts, you mention current efforts as a dead-end on the road to AGI (I agree with you there). But in some other posts like this one, you mention “who gets to AGI first” which seems as if the road actually leads there after all. Where do I misunderstand you please, could you help me clarify?


I think it’s meant rhetorically: a company, any company for that matter, that has AGI as its mission should be held to the highest standards and given maximum scrutiny.


From this perspective it indeed makes sense, thank you!


I was also a bit taken aback by that remark.


Everything is reduced to the cash nexus, as Marx helpfully pointed out. Thanks for fighting on, Gary, and for keeping the rest of us in the loop.


Didn’t they release proof that they didn’t use ScarJo’s voice? I’ve only seen the headlines; I haven’t had a chance to dig in!


They didn’t use it, but they pretty clearly tried to impersonate her, and the legal precedent of Midler v. Ford means that’s a no-go.


All this fuss about OpenAI is a distraction. This is the quiet period before the piracy storm.

Soon we’ll have AI chips in consumer devices, and software that provides user-friendly ways to train LLMs for your own personal use.

And then it will be just like the Napster days, but instead of MP3s we will download (unofficial) snapshots of Reddit, Quora, StackOverflow, etc., and train our own personal AIs.

And you will be able to take movie/TV-show recordings and train your personal AI to be any actor or celebrity.

Remember the old days of the TomTom navigator, and how you could download spoken directions by celebrities (imitations), even though they never recorded those?

Nobody is talking about this upcoming storm.


Good point. If I may be optimistic for a moment, this desire to be closer to the celebrity in question points to the powerful social draw that underlies how we respond to celebrity. That makes celebrity (and the artistry that creates it) intrinsically hugely valuable. AI is never going to replace that; "I'm a huge fan of Stockfish" just isn't a thing, even if it can defeat Magnus Carlsen in every game it plays. People with talent will continue to earn good-to-great livings even if they don't benefit every time someone clicks. As Oscar Wilde quipped: "there is only one thing worse than being talked about, and that is not being talked about".


I don't think we have to worry about OpenAI finding their AGI Holy Grail. The danger is in them successfully convincing others to go sticking AI in all sorts of places it doesn't belong. That's the business model for almost all of the consumer-products industry.


I think that since that whole debacle back in November, Sam Altman is now in his "I don't trust anyone anymore" (Lex Fridman pod) phase. As if he thinks he was so naive before, believing in ethical governance or something. He has a Marvel-baddie look about him now that he's been burnt...


I think one needs to define what "benefiting humanity" means exactly, because as it stands it's too loose. For example, for the Chinese government it might mean better algorithms to detect faces and identify people. For me, the best thing would be for it to go back into the hole it crawled from.


Reading between the lines, it seems to mean "make Star Trek come true".


Well, sadly, we don't seem to be getting Star Trek, but rather The Matrix crossed with Idiocracy.


I think the ScarJo voice is not a mistake but a concerted effort to lure an artist into challenging the fact that someone can have a similar voice.

If so, it is a devious trap. They made it very believable that they did steal her voice, but they also primed us (and her!) to hear her own voice.

It is hard to say whether the strategy works long term, but you have to remember that they're not trying to convince the already vocal critics you may be surrounded by.

To the general public, which may not share what are essentially niche concerns and the accompanying emphasis on copyright issues, the takeaway may not be what you think.

They may end up seeing a millionaire artist trying to ban the voice of a small time voice actor.

Artists may actually get the short end of the stick in this in the general public perception.

Too often the choir of criticism heard in media outlets is confused for the actual public opinion. Trump was broadly ridiculed and dismissed and then won an election to the complete surprise of (I'd say) most people in the media business. I'm not gloating over Trump winning that election, I'm saying it is dangerous to assume you're correct because the people around you think so. It always sucks to be wrong.

This may be a similar thing.

And I think I may not be the only one to realize it. Many outlets stopped covering ScarJo's case and immediately moved on to talking about the larger picture and every other wrong found with OpenAI.

That is not illogical, but the public is not stupid. OpenAI allowed plenty of time for the initial outrage to swell, purposefully I think.

This may actually still be a win for OpenAI.

They may actually end up reinstating the Sky voice (perhaps with a forced disclaimer that it is not ScarJo, and a disingenuous apology).

But regardless, further coverage of this specific case from now on probably helps, not hurts, OpenAI.


Social ownership of these companies may be the only possible way forward. Fundamentally, such hugely powerful organisations should not be in the hands of a small few to do with as they please, so ownership should be equalised throughout society. All of us built these companies with our data, whether we knew it or not at the time, so there's an argument that we deserve a stake in them. Even disregarding that argument, whoever ends up wielding this technology will have immense, potentially at some point godlike, power. So it should be humanity as a whole (or at the very least, the democracies of the world) who collectively own and democratically govern such power. Just as political power should not be authoritarian, neither should economic or technological. How exactly this ownership model would pan out I'm not sure, would love anyone's thoughts on that.


Marcus writes, "Instead, they appear to be focused precisely on financial return, and appear almost indifferent to some the ways in which their product has already hurt large numbers of people (artists, writers, voiceover actors, etc)."

And you appear to be almost indifferent to the substantial benefits ChatGPT has provided many people. My site would have been impossible without it.

I'm sorry, but your coverage of AI is too one sided to be credible.


Helpful as always. Thank you for reporting on these companies' behaviors!


Respect. And Altman's mastered the art of putting out CorpSpeak that means nothing.
