44 Comments
Aaron Turner:

Moral courage is doing what you know to be right even when it costs you. This seems to me to be a concept from a bygone era. Fewer and fewer people in positions of power display moral courage.

Nathalie Suteau:

Two years ago is when I stopped trying to use AI and started testing it instead. It's a fraud of the highest magnitude. It's useless except for nefarious things like propaganda and deepfakes. Sam Altman should be investigated for fraud and convicted. That would put an end to this madness.

Gerben Wierda:

It goes too far to say all AI is 'a fraud of the highest magnitude'. It's not intelligent, its results are often of questionable quality, its training material is often 'stolen', it can and will do a great deal of damage, and Sam and company have turned out to be fools, unreliable, or both. But AI isn't a complete fraud, and it can have value.

Nathalie Suteau:

"Can have value": you wrote, by mistake, that it has no value for now. If it has no value for now, it's fraud, or selling a product that doesn't exist. English isn't my native language and I write on a phone. I'm more accurate than you are.

Gerben Wierda:

In English, "can have value" doesn't exclude its having 'value' for its users in the here and now. And it does. Much of this 'value' comes from it being 'cheap', and the results may turn out to be as dystopian as the 'cheap' that the physical automation revolution delivered roughly two centuries ago.

No need to become personal.

See for instance https://ea.rna.nl/2024/07/27/generative-ai-doesnt-copy-art-it-clones-the-artisans-cheaply/ or the series it is part of.

Nathalie Suteau:

Absolutely not. Do you know what happened to xAI and ChatGPT? There was no cyberattack against them, just excess prompting.

Oleg Alexandrov:

Investigating Sam Altman won't put an end to anything. This is heavy-handed wishful thinking.

Nathalie Suteau:

It's a start. He's spent the last 15 years going from one startup to another, collecting money from funding rounds. It's an easy way to make money. The others, except for Google DeepMind, just imitate OpenAI. There's no innovation. There are even xAI employees posting on X that they want all the LLMs to reach a unification point. It's as if MS and Apple had been selling the same OS in the 80s. What's the point of all these LLMs if they all do the exact same thing? Since when does someone become a billionaire while selling nothing? I'd add that Tesla FSD can be considered a fraud too: it's selling beta products. Beta products are free, or users are paid to test them. Musk and Altman have done the opposite: selling beta products to mine their users.

Oleg Alexandrov:

Sure, you can choose to pester individual people. This is a terrible way of dealing with the actual problems.

The reality is that LLMs are a very powerful technique, despite their well-known limitations. Companies will continue to push them, because there is profit to be made in certain niches, even if their abilities are greatly exaggerated.

So, an investigation of Altman won't happen; even if it did, he did not do anything legally wrong; and even if he had, it wouldn't make a difference.

This is a feel-good, counterproductive approach.

Nathalie Suteau:

We know nothing about you. You seem not to have listened to Gary Marcus with Ed and Brian Krassenstein yesterday. Gary has tried his best to ensure the ethics of AI. I tried too, by contributing to Community Notes on X for two years. What have you done? Who are you? You seem to promote AI, and I suspect you're Russian.

Oleg Alexandrov:

Sigh. A personal attack in lieu of discussion. I am also not Russian. Names are deceiving.

To add, what you propose is highly misguided. The commercial sector doesn't function the way do-gooders like yourself wish it did. Know thy enemy.

Nathalie Suteau:

You could have taken the opportunity to explain who you are. You didn't, and your Substack is locked to non-subscribers even though you have no content. What I explained is accurate, while you're trolling.

jibal jibal:

ad hominem creep blocked

Cameron:

What are your thoughts on AlphaEvolve, Gary? Are the new discoveries legit, or is it just more Google hype?

Gary Marcus:

preliminary take: it’s interesting and legit, albeit with a touch of hype, scope tba. i plan to read more carefully soon.

Cameron:

I look forward to your take on it

Jan Steen:

Altman could echo Groucho Marx: "These are my principles. And if you don't like them, I have others."

Except that Groucho was joking.

Mohak Shah, PhD:

Thanks to the hyperbole on both ends regarding the promise of AI, among other things, the AI policy discussion has been caught in a perpetual cycle of predicting a hypothetical AI future while the current impacts are already evident and need critical attention.

In the name of policy proposals, we continue to see piecemeal attempts that are typically decoupled from the needed outcomes.

Rather than disparate policy proposals, we need an agreed-upon *framework* that focuses on the end results: outcomes. If there are any takers, here's one proposal (put out a few months ago, though we collectively seem to be moving away from a comprehensive, informed discourse):

https://arxiv.org/abs/2411.08241

John Levine:

See the interview in last weekend's Financial Times, in which he made lunch for the FT's editor, Roula Khalaf. He totally charmed her, to an embarrassing degree.

Kenneth Lerman:

I recently wanted to produce an illustration for a music program I was writing. The title of the concert was Soar. I wanted an eagle flying over some music, so I googled and found some images of eagles flying. All of them carried copyrights and required royalties.

Instead I asked some AI, I forget which, to create a picture of an eagle soaring over music. I liked the image and used it. I believe the eagle was taken from one of the images that wasn't free.

So it looks like if I steal it, I can be liable for damages, but if some AI uses it, it's free.

I'd like to see just one of the copyright owners sue the trainer of the AI for statutory damages. It isn't fair use if you copy the entire image and create a derivative work from it.

Stephen Schiff:

This is typical of the way politicians and business people deal with each other. As a further example, I cite the 2008 financial crisis. For over 60 years the banks lobbied to undo the regulations imposed in response to the malfeasance that led to the 1929 stock market crash and the Great Depression. They largely succeeded during the administration of Bill Clinton.

After the 2008 disaster, the dons of the financial community, Jamie Dimon, Lloyd Blankfein, and others, testified before Congress. They had the gall to say that if Congress didn't want them to do what they did, then Congress should have enacted laws to prevent it.

In response to the crisis, the Bush and Obama administrations pumped hundreds of billions of dollars into the banks, some of which was promptly disbursed as bonuses to executives. Nobody went to jail, and the feeble attempt at regulation was once again diluted to meaninglessness by a Congress and Presidents beholden to the financial industry.

The postscript to Gary's story is that President Biden instituted a blue-ribbon panel on AI, with AI industry executives as members. No scientists, mathematicians, ethicists, or consumer representatives. Guess how that has worked out!

Jonah:

I think most of these AI CEOs are deeply self-deluded. They have convinced themselves that not only is general artificial intelligence an imminent inevitability, but that only their efforts can guide it toward a positive outcome—which of course happens to align with their short-term financial success.

If any of that is untrue, the negative possibilities outnumber the positive. If the first belief is incorrect, then the proliferation of shoddy or malicious AI-generated experiences, at the expense of most humans and for little benefit, seems likely. If the second is incorrect, then exploitation of or harm to humans by AI, exploitation of humans by other humans enabled by AI, or exploitation of sapient AI all seem like possible ends.

Whatever they may say or have said, the behavior of these individuals gives the lie to any suggestion that they believe in or care about these possibilities.

Fukitol:

Like every other company ever, OpenAI favors regulation if it harms their competition more than it harms them. Now that they have robust international competition, that effectively means they want no regulation unless it harms international competitors, which means they oppose national and state/provincial regulation. This shouldn't come as a surprise to anyone; it's how corporate involvement in regulation works.

The harms are already here. The AI threat was always that AI would be stupid and used in stupid ways. If you want maximally harmful technology, you make the tech unreliable and easy to use badly, while being difficult to use for productive ends. Because this happens to correlate with the easiest and fastest way to make nearly any technology, it's what you get by default. The consequences are further amplified when the game is being first to market to win the network effect.

We programmers and engineers know how to make things safe and reliable. It is difficult, time-consuming, and not reliably cost-controllable, but it can be done. The bosses know how to sell things. When these two come into conflict, unless there is existential liability (either business-ending or personal, on the part of executives), sales always wins. Cutting corners and "testing in production" is the default in consumer software because there is no liability whatsoever. LLMs are software. It was always going to go this way.

James Horton:

Ironic that China seems to be taking the regulation of AI more seriously than the US.

Claude COULOMBE:

Altman's actions are hypocritical and manipulative. I have never believed in his sincerity. Altman is a hype machine. Every day he makes a sensational statement, most often just to provoke, and the clickbait press and social media revel in all his bullshit.

Now he's invoking the Chinese scarecrow and national security to justify immoral behavior and to despoil artists and creators around the world. Psychologists speak of "projection," which involves transferring one's own unethical behavior onto another person, who can then be criticized. But moral requirements are independent of the actions of others: just because my neighbor cheats on his taxes doesn't mean I have to. History will remember Altman's pettiness.

Alex Tolley:

I think that if small models based on knowledge domains prove useful and profitable while the hyperscalers remain loss-making entities, then many of the demands of Altman et al. will be obviously bogus, and maybe, just maybe, we can get Congress to act responsibly, once the GOP is pushed out of control of Congress.

It should start with strong privacy laws and effective laws that make certain uses of AI felonies with strong penalties. Copyright should be enforced (but its term reduced), and use should be opt-in, not opt-out. Social media needs Section 230, but there should be minimum federally imposed levels of protection from harm that must be observed, just as gun owners are liable for allowing firearms to be easily used by unauthorized people to commit crimes. Allowing industries to write their own rules and legislation is beyond even regulatory capture.

OTOH, this new "Abundance" credo, which has good aims, looks like a hidden Libertarian ideology meant to block opposition to development, especially environmental degradation. It is just another plank to let those with wealth and power do what they like, including selling powerful AI tools to the wealthy and powerful.

alwayscurious:

Altman reminds me of Bill Gates.

Martin Machacek:

We will see in a couple of years. I think Bill Gates redeemed himself with his philanthropy, which is IMHO meaningful and largely positive.

alwayscurious:

I meant that they are both potentially problematic for society: lacking a normal conscience and having an excessive need for control.

jibal jibal:

Well, you're mistaken about Gates, but that does seem to apply to Altman.

Matt Kolbuc:

It's imperative that open source leverage the AI revolution against big tech, just as big tech is trying to leverage it against society en masse. AI assistants are coming, and having one will soon be as mandatory as having a smartphone is today. These AI assistants must not come from big tech's data centers, which will require your daily life to be streamed into them; they must come instead from an encrypted box in your own closet.

Martin Machacek:

That is unfortunately not technically possible at any meaningful scale (i.e., available to enough people to make a difference). Sure, you can run many small to mid-size models at home on hardware that fits a modest budget, but the quality of the output is not going to match commercial services, and it is simply not worth the effort for the vast majority of people. I do agree, though, that giving more personal and private information to big tech is bad and should be avoided.

Will Peterson:

Movie script idea: a duel between a cyborg sent from the future by humanity to destroy Sam Altman and another more advanced Skynet cyborg sent to save him on behalf of the machine.
