28 Comments
Feb 29 · edited Feb 29 · Liked by Gary Marcus

Glad to hear it. I'm dead set against it. I read somewhere, the Financial Times I think, a couple of years back that it will eliminate 70% of the workforce. That would be fine if we were planning for a different kind of economy, but we're not. How about we create an actual workforce of human beings? When/if we nail that, we can move on to the next step. As for Hollywood: as a member of the WGA, we're behind on this issue, as was SAG. AI has already stolen manuscripts, likenesses, etc. It may be an asset to business, because business has only one interest: money. But to creatives, it's an existential threat.


Chatbots and image generators cannot eliminate 70% of the workforce. The complexity of things even an average office worker has to handle is totally astounding.

That being said, better get used to steady encroachment in the labor market for decades to come.


Correct, they can’t.

But executive teams, especially at public companies, will have none of it. They need to maintain the appearance of profitability in the face of yet another bleak financial year. I have spoken with a couple of them; it's do or die.


We covered some of their larger macro problems in our latest podcast. In addition to the ethics issues, they have major product and marketing challenges, specifically with the attention-driven approach that has created a hype bubble, and with their lack of understanding of who their true customer is (hint: it's not the general public, in spite of their tsunami approach to PR).

The crossing-the-chasm moment for ChatGPT will only carry them so far. Sooner or later the repeated and frequent missteps will drag them down. In hindsight, the Altman firing looks pretty smart. Too bad investors did not see it that way.

Mar 6 · Liked by Gary Marcus

Meanwhile, Mistral is quietly wolfing down OpenAI’s lunch (witness the MSFT and Snowflake deals).


And now a potential SEC investigation of Sam Altman over his less than candid management style!


I would much rather have read about a real "problem" for AI, namely that the majority of society realizes that AI's only true purpose is to further meaningless consumerism of media isolated from humanity, that the entire world loses interest in it, and that all AI companies become destitute. Or at least that some hacker broke into OpenAI and deleted every copy of all their code and all their research as well.

Because all these problems at OpenAI only mean that AI might be developed by someone else, or take an extra few years to become as good as people think, or some other such thing. But they really have no bearing on how AI will develop in our society over the long term. So while we can entertain ourselves endlessly over these developments, what exactly is the point?


Observing all the entertainment-industry pundits proclaiming that Sora will be the "death of Hollywood" is amusing to me. Tyler Perry's remarks last week, quoted all over the trades, struck me as a knee-jerk FUD reaction. Yet it's clear when you drill into it with the pundits that they really don't understand either (a) the requirements of a motion picture or television series, or (b) the technical, logistical, legal and sociocultural issues that come with mass, wholesale adoption of generative AI.

author

i think it will be great for short little B-roll clips in music videos, but it is nowhere near usable for main footage in feature films. won’t be for a long time, i suspect. too many continuity errors, physics violations, etc. it will find a use, but not like Perry is imagining.


I'm in agreement with you. I'm a member of the International Cinematographers Guild here in Vancouver and work in the industry. I've been having this debate with a variety of colleagues here in recent weeks. Thank you, Gary, for bringing some much needed sanity to all the hyperbole surrounding these AI developments. Honestly, I'd love to see you give a talk to entertainment industry stakeholders here in Vancouver; perhaps offer a little more perspective than they seem to be getting inside their various bubbles.


For a certain narrow genre of filmmaking, prolific editing should suffice. If you can't dazzle them with brilliance, misdirect them with montage.

The Burroughs/Gysin cut-up/fold-in technique probably has some relevant application in that regard. https://brightlightsfilm.com/appraisal-films-william-burroughs-brion-gysin-anthony-balch-terms-recent-avant-garde-theory/

I'm more of a Terrence Malick fan, myself. Good luck with AI coming up with anything like The Tree of Life. But someone out there might be able to pick up on what I'm getting at, with entertaining results.


OpenAI is still in the lead, and it will take time for competitors to translate the narrowing gap with GPT-4 into actual market share.

The most important question is where the tech is going next. Likely that's not video, which looks like a side amusement at this stage.

What is needed is to improve the reliability of chatbots. Here, Google is likely the company to beat, as they have deep expertise in techniques that could augment language models.
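One concrete example of such augmentation (my choice of technique, not necessarily what any lab will actually ship) is retrieval: instead of trusting the model's parametric memory, fetch relevant documents at query time and instruct the model to answer only from them. A minimal sketch in Python, with a toy keyword retriever and a hypothetical llm_answer() standing in for whatever chat API is really used:

```python
import re
from collections import Counter

# Toy corpus standing in for a real document store.
DOCS = {
    "press_release.txt": "Mistral announced partnerships with Microsoft and Snowflake.",
    "reliability.txt": "Grounding answers in retrieved sources reduces, but does not eliminate, hallucination.",
}

def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, k: int = 1) -> list:
    """Rank documents by naive keyword overlap with the query."""
    q = tokenize(query)
    scored = sorted(DOCS.items(), key=lambda kv: -sum((q & tokenize(kv[1])).values()))
    return [text for _, text in scored[:k]]

def llm_answer(prompt: str) -> str:
    """Hypothetical stand-in for a chat-model call; not a real API."""
    return f"[model response conditioned on a prompt of {len(prompt)} chars]"

def answer_with_retrieval(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using only the context below; say 'I don't know' if the answer is not there.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_answer(prompt)

if __name__ == "__main__":
    print(answer_with_retrieval("Who did Mistral partner with?"))
```

The point is that the model is constrained by retrieved evidence rather than left to free-associate, which is one plausible route to better reliability.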


It appears that Gemini is having the same problems as ChatGPT so far:

https://www.racket.news/p/i-wrote-what-googles-ai-powered-libel


Yes. It took Google less than a year just to catch up. Fixing reliability will not happen in a year; it will be incremental work.


Reads like forced pessimism to me. Blows minds but fails at basic physics and biology? Come on dawg, no need to act like there is anything that even comes close to Sora in the video-generation AI space right now.

Mar 6 · Liked by Gary Marcus

Hmm. It's hard to call looking at something objectively forced pessimism. An imperfect illustration, but crack too is mind-blowing (literally) yet fails at just about everything else. If I criticise it, is that just sour grapes?

Sora’s videos are super cool, I agree. But as a video-agency founder (and former special-effects supervisor), I would be very amused if I or my team had to correct clients’ future Sora footage containing biological and physics errors. It might also turn out to be very expensive for them 😂


History always repeats, especially when you’re not paying attention. Gary, you’re just pointing out the icebergs. But as usual, the hype is blinding. So I hope the people aboard the hype-ship have lifeboats.


They did explain the model update: they pushed an update that was incompatible with some GPUs.

Andrej Karpathy took to Twitter to explain why he left.

The average Marvel movie viewer wants their mind blown and is happy with Captain America and the Hulk.

Also, did anyone see the EMO paper? Temporally stable facial animation, lip sync, and singing from a static image: https://youtu.be/f_d-8BGIzPI?si=NltWr-4mYewrW3hf


"New results from Subbarao Kambhapati cast doubts on the robustness of chain of thought prompting" — can you provide a link?


Yeah, it looks like LLMs cannot do more than "creatively" improvise given similar-enough examples. That is not surprising. There's nothing in maximum-likelihood training that suggests they could learn algorithmic thinking.

Which means they will need a gazillion examples of how people solve specific problems.

This is still not a bad approach if the goal is an assistant that helps with simple tasks. Oftentimes the same kinds of problems show up again and again with limited variation, but even that variation was enough to confuse earlier methods.

Then, hopefully, the LLM can generate some code which can be passed to smarter tools that actually know algorithms, etc.
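A minimal sketch of that hand-off, under the assumption that the model is only asked to translate a word problem into a bare arithmetic expression: llm_generate() below is a hypothetical stand-in with a canned output, and the "smarter tool" is a tiny exact expression evaluator that cannot hallucinate.

```python
import ast
import operator

def llm_generate(problem: str) -> str:
    """Hypothetical stand-in for an LLM call that would translate a word
    problem into an arithmetic expression; canned output for illustration."""
    return "(17 * 24) + 3"

# The "smarter tool": a small, exact evaluator for +, -, *, / expressions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported syntax")
    return walk(ast.parse(expr, mode="eval"))

if __name__ == "__main__":
    problem = "A crate holds 17 rows of 24 bottles, plus 3 loose ones. How many bottles?"
    expr = llm_generate(problem)                      # the model improvises the translation...
    print(problem, "->", expr, "=", evaluate(expr))   # ...the deterministic tool does the math
```

The division of labour is the whole point: the LLM does the fuzzy translation it is good at, and a deterministic solver does the part that must be exactly right.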


On the lawsuits:

I think the courts, at this point, are going to try to bypass the issue as much as possible by treating the way that AI uses copyrighted material as permissible under fair-use concepts - finding that AI does "enough" of a transformation that the output can't be treated as an attempt to substitute for the copyright holder's original work. OpenAI's attack on the NY Times for paying someone to "hack" into the software in order to reveal the copyrighted material suggests that's exactly the direction they'd like this to go - if the original material is sufficiently well protected that an ordinary user can't get to it through normal use of the software, shouldn't that be sufficient to demonstrate transformation? I doubt that the judges want to open up that box - they'd much rather wait and see if Congress will do anything to modify the DMCA.


Unfortunately, such "transformation" is often negligible, even nonexistent. This is very evident when contrasted with LLMs that are trained on royalty-free material from the get-go, as I can demonstrate very clearly.

https://www.linkedin.com/posts/zingrevenue_digitalmarketers-contentcreators-largelanguagemodels-activity-7162683101821722626-XX7T


We have to remember, too, that "transformative use" goes to only one of the tests used to determine whether a particular use of a copyrighted work is fair use. See 17 USC 107 (link below). I called out the transformative-use argument because I think it's probably OpenAI's strongest defense - certainly in the Copilot lawsuit, for example, the 2021 Google v. Oracle case is close to being on point. But I'm not a lawyer; I'm just trying to read the tea leaves from the various news articles that are floating around, and my spidey sense tells me that the courts are going to be very reluctant to jump into this one.

https://www.law.cornell.edu/uscode/text/17/107.html

Mar 6·edited Mar 6

Thanks Mike, good points.

But this is too important and serious for the judicial system to ignore.

The “APIs” in the Google case were limited to textual characters (Java package and class names); technical remedies emerged as a result.

In the NYT lawsuit we're talking about plagiarism, subtle and brazen, of the entire internet: text, pictures, video, audio and beyond (web games, legacy multimedia, who knows), without attribution or compensation. Technical remedies are impossibly impractical; the defendants would have to start from scratch.

A useful analogy would be justifying reselling items from a robbery, even “repurposing” them (like a gold sculpture refashioned from a stolen family heirloom, melted and sculpted), with distinctive marks still on the new product.

And with the defendant solely focused on commercial goals, it’s hard to convince the jury of the “good of humanity” angle.

Time to parachute in Judge William Alsup? 😊


I understand your concern. My perspective is that the courts have to decide whether *as a matter of law* the actions taken by OpenAI, Google, etc. meet the standards for fair use of protected material.

I will note that plagiarism by itself is not against the law. Unethical, yes; illegal, not unless there is a copyright or trademark violation involved. The plaintiffs have to prove not only that the material was taken without attribution, but that it was taken in violation of copyright/trademark law, and that the plaintiffs have been harmed by that violation.

Mar 12·edited Mar 12

Thanks Mike - not sure how much more blatant the examples have to get in order to demonstrate how serious ripping off others is.

https://www.linkedin.com/feed/update/urn:li:activity:7162038460122423296

And how, then, is charging for LLM API use not harming copyright holders?

Not that different from a Potrero Hill, CA bar owner justifying illegally streaming the Super Bowl on ten huge screens paired with deafening speakers while claiming that "everybody does it in San Francisco, Your Honour" (while quietly jacking up the food and drink prices).

https://www.findlaw.com/legalblogs/small-business/can-you-get-sued-for-showing-the-super-bowl/


Hope the copyright and intellectual-property infringement suits reach into OpenAI's Board of Directors, personally and collectively. They utterly failed to prevent management from committing illegal acts and from benefiting from those acts.
