28 Comments
Feb 2 · edited Feb 2 · Liked by Gary Marcus

The majority of people producing copyrighted material are independent artists. They will be the ones disproportionately affected by AI theft. As mentioned by Gary Marcus and in the comments below, with the well-known players it's rinse and repeat: overreach, get sued, settle, and do it again.

Feb 2 · edited Feb 2 · Liked by Gary Marcus

I noticed this in the Bard email today: “Using a technology called SynthID, all unique images generated on Bard will have embedded watermarking to indicate that it was created by AI. The watermark is directly added into the pixels of an AI-generated image, meaning it’s imperceptible to the human eye but it can still be detectable with SynthID. It’s important that we approach the creation of images with AI responsibly.”

So users are stamped with a watermark on the output, but the input is taken scot-free (so far). Interesting.

"Being able to identify AI-generated content is critical to promoting trust in information. While not a silver bullet for addressing the problem of misinformation, SynthID is an early and promising technical solution to this pressing AI safety issue."

https://deepmind.google/technologies/synthid/
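For anyone curious what "the watermark is directly added into the pixels" could mean in practice: SynthID's actual method is proprietary and far more robust, but the general idea of an imperceptible pixel-level mark can be illustrated with a classic least-significant-bit sketch (all names and values below are hypothetical, purely for illustration):

```python
# Toy illustration only -- NOT SynthID, whose technique is proprietary.
# Idea: hide watermark bits in the least significant bit of each pixel,
# so each pixel value changes by at most 1 (imperceptible to the eye)
# but the mark remains machine-detectable.

def embed_watermark(pixels, bits):
    """Write watermark bits into the LSB of the first len(bits) pixels."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # changes each pixel by at most 1
    return out

def read_watermark(pixels, n_bits):
    """Recover the first n_bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

image = [200, 201, 57, 58, 120, 121, 33, 34]  # hypothetical grayscale pixels
mark = [1, 0, 1, 1, 0, 0, 1, 0]               # hypothetical watermark bits

stamped = embed_watermark(image, mark)
assert read_watermark(stamped, len(mark)) == mark
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

Real schemes like SynthID are designed to survive cropping, compression, and recoloring, which naive LSB embedding does not; this is only to make the "in the pixels, invisible to humans" claim concrete.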

Hey Google – If you're so smart, how about developing a system for identifying where your inputs and outputs came from (even when relatively morphed) – some kind of end-to-end IP chain-of-custody (CoC) technology? IPCoC? ... Now *that* would be cool... more cool than more Mario look-alikes ...


Some big corporations are stealing the “intellectual property” of some other big corporations

I get that this is not OK on some level, but I’ve also seen little to suggest that any normal person will be harmed

Capitalist on capitalist crime. Have at it


I'm not a fan of the big tech companies OR copyright law. Both cause societal problems.

However, it seems clear to me that the problem here is not the AI models, which are merely learning about the world from looking at freely available content (this is fair use IMO).

If someone generates and distributes objectionable content (of any kind, whether it's copyright violation or a deepfake nude) then the problem is the person who created and distributed that objectionable content, NOT the tool they used (which could be AI, or Photoshop, a photocopier or a paintbrush).

Feb 2 · edited Feb 2

OpenAI could take high risks because they are a startup and went in first.

By now, Google had enough time to ponder things. I don't think they stepped in unprepared.

Lawyers will have their day in the sun. I am betting, however, that Google will come out on top, paying up if necessary (a small fraction of their profits).


Well, come on, they have to stay competitive in the copyright violation technology race! 😆


Hopefully artists will be able to use tools like Nightshade that will make stealing their images counter-productive.

https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/


I thought that AlphaBet was a betting and gambling company…


"Gary Marcus is standing firmly by his prediction that 2024 will be the year of Generative AI litigation."

Translation:

"Gary Marcus is hoping to capitalize on AI hysteria along every conceivable vector, and to cheerlead capturable regulatory bodies to advance his company's absurd goals no matter how detrimental to human freedom they obviously are."

Now: everyone resume their praise of this moral idiot, who answers no questions no matter how generously they're asked.


A thought occurs... Given "Better Call GPT" (https://arxiv.org/abs/2401.16212), how likely is it that we're going to see LLMs used to facilitate litigation against themselves...? :-)
