73 Comments

Citizens' lack of awareness, understanding, and interest is a real problem. We do not do our homework.

That’s *exactly* why I wrote *Taming Silicon Valley*.

AI is frightening. It can be more destructive to humanity and civilization in the aggregate than the nuclear bomb.

If you are as concerned as we are by the "nuclear weapon that can make more nuclear weapons", I invite you to #PauseAI. We need to coordinate a response. I generally think that AI can be great, but, like all things, it needs some degree of regulation.

Intelligence needs wisdom, too.

https://discord.gg/YufTgPE9

Can I share this, or is it only for Gary?

You can share it with everyone! Please do!

Implausible. AI is more likely to generate massive benefits, especially if it can accelerate scientific and medical research.

Ant: Super-ant-level intelligence is going to benefit all animals.

Human: Invents shoes, wheels, and ant-poison.

Ant: There's no way that humans can attack us invisibly and just cause us to kill our own queen.

Human: creates bait

***

Human: Silly ants, there's no way that economic incentives will cause us to disregard all safety and create something that kills us all.

Human: creates AI

Beware of all shiny toys. Lots of pluses, but when it comes to intelligence it will be garbage in, garbage out.

There might not be humans left after the "acceleration."

1) Getting rid of Silicon Valley entirely wouldn't really solve anything. The Chinese, Indians, and others would just pick up the ball and keep on running with it.

2) Getting rid of AI entirely wouldn't really solve anything either. Other vast powers would still continue to emerge from the knowledge explosion faster than we can figure out how to safely manage them.

3) Focusing on particular companies and technologies is a loser's game. So long as the knowledge explosion is unmanaged, by the time we solve one problem three more will appear.

It sure seems this citizen quest for "trusted AI" is the wrong angle (or even a Trojan horse). It would mean a government agency (or even a private one) examining products and placing their "trust" stamp on them. This could drastically slow down invention, because where do you limit this new agency's power? We already go through all kinds of "trust" hoops to get an app through the Apple Store. What's to stop this new "trust" agency from saying they have to look at all apps, because they might have AI in them?

I'm really liking your thread (glad I found it). I actually found it through the book "The Myth of Artificial Intelligence" by Erik Larson, which I stumbled on randomly at the library since he shares a name with the other Erik Larson, author of some great historical books. Looking forward to your book.

As a computer scientist, my latest position is that I'm boycotting AI. I'm with Richard Feynman and his book title "The Pleasure of Finding Things Out". I don't need (or want) a co-pilot.

Cheers,

Larson's book is excellent!

I wouldn't want the version of government regulation you describe. What I'd like is regulation of tech companies' access to the raw inputs of AI (and a whole lot else): our data. We've taken the laissez-faire approach, which has led to effectively unrestrained data collection, data brokers spreading it far and wide, and an internet economy optimised to deliver advertising.

Congress have been silent on this, and on permissible use of data for training AI. I would like to see them regulate at the data collection and distribution level, which would serve to tame the online wild west without needing granular regulation.

I agree that’s a better approach. My fear is that this “someone has to do something” attitude will push us toward oversight that isn’t helpful. The EU, I’m sure, is chomping at the bit, as they like to have control (of things they didn’t invent).

It doesn't seem so wrong to "slow down" something very dangerous; it's a bit like a new plane we all had to board: slowing it down for some licensing and oversight is a very good idea.

My point was: don’t have a government organization that will touch my code to see if it’s to be trusted (especially as I don’t use AI). Let the free market do that (if it’s bad, you get sued, like is happening with AI now). The airplane industry added its own “trust” organizations and standards to help avoid getting sued (and keep flying).

I'd just add that, in order for AI companies to be reined in by the threat of civil lawsuits, we need laws for them to break.

It would be fine if it didn't have existential risks involved. Sadly, this does.

Don't forget the plentiful fanboys and sycophants who constantly sing the praises of OpenAI and its ilk, sweeping the points you listed under the rug as small bumps on the road to their "greatness". They've certainly done their part.

On a lighter note, just for a laugh, here is my recent interaction with Llama 3, the latest and greatest open-source LLM from Meta. As expected, I got the correct solution to the classic farmer/wolf/goat/cabbage river-crossing problem from Llama 3, as the problem must have been in the training set. I then asked a follow-up question: "What if I have two wolves in this case?". Here is Llama 3's response:

"A twist! With two wolves, you can't leave them together with the goat, as they might harm it. Here's a solution:

1. Take a goat across the river first and leave it on the other side.

2. Go back to the original side and pick up one of the wolves.

3. Take the wolf across the river and leave it on the other side with the goat.

4. Go back to the original side and pick up the second wolf.

5. Take the second wolf across the river and leave it with the other wolf and the goat.

This way, you've never left the wolves together with the goat, and everyone is safe and sound on the other side!"

So this is the kind of system whose underlying technology will lead us to AGI? UBI? A bright future for humanity?

What a joke!
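For anyone curious, the failure is easy to make concrete with a few lines of code. Below is a minimal sketch of my own (the names and structure are my invention, nothing from Llama): a breadth-first search over river-crossing states, where a bank is unsafe whenever the goat is left with a wolf and no farmer. It finds a valid seven-crossing plan, and it would never emit Llama's step 3, which strands a wolf alone with the goat while the farmer rows back.

```python
from collections import deque

# A state is (items on the left bank, farmer's side). Names are illustrative.
ITEMS = frozenset({"wolf1", "wolf2", "goat"})

def unsafe(bank):
    # A bank without the farmer is unsafe if the goat shares it with any wolf.
    return "goat" in bank and ({"wolf1", "wolf2"} & bank)

def solve():
    start = (ITEMS, "L")        # everything, plus the farmer, on the left
    goal = (frozenset(), "R")   # everything, plus the farmer, on the right
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, side), path = queue.popleft()
        if (left, side) == goal:
            return path
        here = left if side == "L" else ITEMS - left
        for cargo in [None, *here]:  # cross empty-handed, or carry one item
            new_left = set(left)
            if cargo is not None:
                (new_left.discard if side == "L" else new_left.add)(cargo)
            new_left = frozenset(new_left)
            new_side = "R" if side == "L" else "L"
            # The bank the farmer just departed must remain safe.
            departed = new_left if new_side == "R" else ITEMS - new_left
            if unsafe(departed):
                continue
            state = (new_left, new_side)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "nothing"]))

print(solve())
# e.g. ['goat', 'nothing', 'wolf1', 'goat', 'wolf2', 'nothing', 'goat']
```

The constraint check is one line, and a blind search respects it automatically. Llama 3, pattern-matching on the classic version of the puzzle, walks straight past it.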

I love this so much. Nice work.

Incredible! It would be funny if it weren't so trAIgic.

Not to mention, AI is far from artificial intelligence. It should accurately be renamed for what it is: Automatic Information. There is nothing intelligent or knowing about it.

I call it AA or GM, Algorithmic Automation or Generative Models. I always ALWAYS put 'AI' in quotes or even say 'so-called AI'. My own personal line in the sand.

I have a lot of names like this that I’ve created and use for the acronym AI: Automated Ignorance; Assistive Insipidity; Assembled Ingesta. They each have their own place in the conversation.

Oh wow, I wasn't aware of the "clawback" clauses. I mean, we have vesting cliffs and 2-year noncompete agreements at Ingk.com, but we'd never propose taking equity back. WTH? Who would want to contribute their work to *that* company?

The anti-education movement has won all the recent battles. How can we explain the seriousness of topics like AGI to an audience that lacks understanding of, or interest in, economics, history, science, and mathematics beyond what they are accidentally exposed to on various visual media? The pot is already getting warm and the frogs think it's a free sauna.

I dunno how much basic education has to do with this. Are credulous AI-hypesters really less well educated than their counterparts? Seems the other way around to me. I'd be willing to bet that less educated people are also less eager to incorporate AI into their daily lives.

I like to think that we can get a big tent going, given that all of humanity and our values are at stake; right, left, or center, nothing would survive unrestricted AI. I do still see a lot of copium ("AI won't be able to do this!") and hopium ("everything will be fine!"), but I'm in the camp that everything will be fine if we actually get involved in something that's affecting our future.

Here's to #PauseAI.

I’m not sure what you mean by the “anti-education movement”, but I do see a lack of good education in the US, not to mention a dumbing-down by the amplifier of social media culture, which will be further worsened by a giving-over to AIs to do people's thinking and "creating" for them...

In any case, many now want to install AIs as tutors, teachers, or counselors!

There has been a concerted effort by the extreme right to dismiss any "expert" who dares to espouse views on any of the mentioned subjects that conflict with their "alternative facts". This has been present at all levels of education and public discourse for over 20 years now. Any balanced source of information, starting mainly with what the extreme right calls the "left wing liberal press", has been the subject of their wrath. When all balanced sources of information are so denigrated, it becomes easier to indoctrinate those who weren't very educated from the start. This is the master plan for undermining western civilization, now enhanced via social media.

What have we learned? Don't give deadly weapons to children. Don't give billions of dollars to people with no relevant life experience. Don't confuse a one-hit wonder with The Rolling Stones.

I like Gary's list of things OpenAI have done to lose trust. I think there's something bigger, which I can only call a vibe: OpenAI just don't come across like serious people. They seem more like they're role-playing "serious people" in some sci-fi drama. Like, they'd be the serious people if real life was an episode of Star Trek.

The best example is their constant "maybe our technology is too powerful but we need it for humanity oh no what should we do???" moral crisis routine. The most recent instance is "Voice Engine"; here's what they have to say:

https://openai.com/index/navigating-the-challenges-and-opportunities-of-synthetic-voices/

The whole article is about how totally amazing Voice Engine is and how much good it could do for so many people but also how bad it could be if put into the wrong hands and how OpenAI are eager to form partnerships with governments and civil society leaders so that we can all work together to integrate Voice Engine into society in a responsible manner, blah blah blah.

Compare this with Google and Sundar Pichai. Pichai may be a hypester, he may sound like a salesman, but he doesn't pull this kind of bullshit. Google aren't putting out company statements that sound like they belong in an Asimov novel, or were drafted by the United Nations. OpenAI give me big time "we aren't being serious" vibes, which leads me to view everything they do with suspicion. At least we know what Google are up to: collecting all our personal data and turning it into ad revenue. They may not deserve our trust, but they're untrustworthy grown-ups in suits who take things seriously.

Thank you for sharing this. I was aware of most of this as it happened, but not all of it; you filled in some gaps.

On your proposal on the need for independent scientists: absolutely. They could be self-funded, funded through grants, or funded independently by concerned individuals, groups, and users. Governments can and should impose information-access requirements, like ingredient labels or material standards from independent groups such as ASTM. A given AI could be required to comply with ‘X’ independent standards and provide access to information/data groups ‘Y-Z’. These requirements need to grow up with the AI efforts.

I'm not a scientist but a former CSO, and I was (and still am) overly pushy about security standards for users' information and user experiences. There are plenty of us who are very disheartened by how the elegance and opportunity of AI is being squandered and exploited. But I'm old enough to have seen this happen in technology many times before. The coming backlash will be strong and broad.

Personally, I'm using less and less cloud computing, email, and social networking, and writing more and more letters by hand and sending stamped snail mail. A company opening my letters and using them for AI training data would be charged with obstruction of correspondence, mail tampering, and mail fraud. And if the recipients were to retype them into a computer, what a waste of their lives. My experience and knowledge are too valuable to carelessly risk on the AI hoovers which are preying upon the innocent. There is much more intelligence in the cells of one of my fingertips than in anything I've seen output from an AI system.

Short description of OpenAI: "Shady AF"

Society's willingness to go off the deep end with the latest unfounded claims about technology goes well beyond trusting any individual or organization. Trust must imply verification. Otherwise that's faith rather than trust.

I agree, and there is much that can't be "verified". Claims of emergent abilities in LLMs could be critically assessed by searching their training data, but that's a secret. Post-training reinforcement methods are a secret. But most importantly, the output of generative AIs is inherently unpredictable. That's the whole point! In order to be maximally flexible and create human-looking output, they *can't* be predictable and they *can't* be reliable. The tech companies can't stop them from saying stuff that's wrong or offensive. Them's the breaks. They're desperate to not admit this.
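To put a finer point on it, the unpredictability isn't a bug to be patched; it's the sampling step itself. Here's a toy sketch (the distribution is invented purely for illustration): pick tokens greedily and you get determinism but canned, repetitive output; sample from the distribution, as generative models do, and you get variety that is irreducibly unpredictable.

```python
import random

# A made-up next-token distribution, purely for illustration.
next_token_probs = {"reliable": 0.40, "wrong": 0.35, "offensive": 0.25}

def greedy(probs):
    # Deterministic: always the single most likely token.
    return max(probs, key=probs.get)

def sample(probs):
    # Stochastic: this one call is where the flexibility, and the
    # unpredictability, of generative output comes from.
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(greedy(next_token_probs))                      # "reliable", every time
print([sample(next_token_probs) for _ in range(5)])  # different on every run
```

You can shrink the randomness (lower the temperature), but at zero you're back to rigid, canned output. The companies can't have both, and they know it.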

Does any of this hurt OpenAI enough with the smartest potential employees who might actually invent something extraordinary? I’m skeptical, but then Musk’s companies seem to have made more and more mistakes since his public image turned troll.

A first step might be for the general public to recognize that CEOs are not the inventors. Ideally the advisory board would be a mix of CSOs, CTOs, top academics, and actual nonprofits, and would also include people from other industries and disciplines. It’s amazing how little experts know about other fields. It makes normally smart people think that radiologists are simply image readers.

https://patriciaburke.substack.com/p/may-27-28-safe-tech-international Thanks, Gary. I shared this today and am also reading the archives. I appreciate your contributions and have subscribed.

We are sorry, Mr. Marcus, that you were not one of those chosen to lead the US government's AI Safety and Security Board. This new entity needs proactive thought and leadership, not fearful criticism that may lead to stagnation and slow technical research and development in good AI.

This behavior seems to be part of the ethical imprint of Silicon Valley higher-ups, and it needs to be acknowledged. The Silicon Valley types are treated like God's gift to the planet. We are the ones who fuel their platforms. We must not forget that. They can't win without us.

Hi Gary, so happy to find you, courtesy of The Technoskeptic. I think you have a missing word and an extra letter in one paragraph near the end?

The one thing I would add is this: we can’t ?TRUST? governments here, either; mosts (SHOULD BE MOST?) governments, however well-intentioned, aren’t expert enough, and lobbying runs deep and at many levels.

I end up with typos too, but those are such good quotes.

(The FCC is the most criminally negligent group regarding RF safety exposure limits for wireless, and no one knows about it.)

So glad you are covering AI. I publish a news summary and will point to your work.

Why is it that the most prominent leaders of AI companies (OpenAI’s Sam, Tesla’s Elon) feel the need to exaggerate and lie in ways that become quickly and easily discoverable with the passage of time (and not even that much time)? I mean, between Elon’s FSD claims and, sadly, so much of what comes out of Sam’s mouth, it’s just comedy at this point. Following the court drama around Elon’s comp package has really opened the kimono on his deceptions. Then there’s Hinton, whom I don’t know whether to treat as a senile old dude, someone lacking understanding of what he has been working on all these years, or just a research scammer.
