41 Comments
Mar 1 · Liked by Gary Marcus

I have a lot of moral issues with Elon Musk, separately, that are part of a different discussion, but even a broken clock is right twice a day.

People focused on the general intelligence bit are missing the real damage it is doing *right now* to privacy, job security, copyright, and human creativity. I think pointing out that OpenAI has strayed far from its original nonprofit structure, and what that means, could address multiple concerns.

It is inevitable that AI will be hugely disruptive, no matter how "ethically" they go about it. This lawsuit can't change any of that.

Only the copyright lawsuits may have merit, and even there, a wholesale prohibition against using any copyrighted material in training looks unlikely.

"Disruptive" is a bad word; it is disingenuous to even use it, tbh. If we take a look at history, jobs were never really completely extinguished, they were transformed. This is more than "disruption", it's the complete obliteration of a job. You don't need to code anymore, you don't need to make art anymore, you don't have to direct movies anymore, you don't have to put in effort at all, so why do we exist then? No effort goes into anything, because whatever you own, you don't actually get to keep it private: the moment it gets uploaded to some social media site (which you're uploading for your friends), your life will be used to train a model by multiple organizations, without your permission, to get you addicted to the device you literally paid to own. So no, this is exploitation, and it's beyond dangerous to steal data from literally everywhere without any permission.

Technology isn't taking jobs. If society wanted, it could easily have a 0% unemployment rate. These are political and economic decisions. There are tons of jobs out there that need doing.

Much of the public sector in America is underfunded and understaffed, yet people blame job losses on technological automation. They might want to ask themselves why they hold these beliefs.

And yes, you still need to code, or at least write pseudocode. The job of coding will become more and more about giving complex instructions in natural language to these LLMs. That's productivity growth, not job replacement.

AI will not replace artists any time soon if ever. AI stuff is considered cheap. People like people stuff.

As to AI taking other jobs, it will be a slow process, and we will adapt. There is so much more we could do if we had more automated help.

An important ethical and societal issue may arise when, assuming constantly improving AI, AI-driven tools someday become able to replace not only the clerks who laboriously collect data but also the senior analysts who interpret it: that is, when higher-level intellectual jobs are concerned.

Yes, automation is coming after higher-level jobs. The process is more gradual than people think but it will happen. Hopefully we'll have enough time to figure out the proper balance.

If this argument had been made in the 1960s, I would've agreed. "People like people stuff" doesn't work today; even the best of people will take money over people if it means they can buy a home and go on holiday trips.

I am not saying people will choose people stuff out of the goodness of their heart.

We still have store cashiers, coffee baristas, and massage therapists, even if machines can do those jobs. People prefer service from people. It feels like you get more value that way.

Nope. But which company has ever cared about people? It's very rare to find a company with that motto. It's profit at all costs.

Mar 1 · Liked by Gary Marcus

Something funny to consider is that the plaintiff (Musk) is arguing that GPT-4 is an AGI, and could make a somewhat compelling case to a judge that it is one, on the grounds that it is smarter than the median human, by showing its performance on various exams like the SAT.

We might wind up with an amusing hypothetical where OpenAI has to explain to a judge that GPT-4 isn't actually smarter than the median human at all, contrary to the hype. Their defense could very well consist of numerous examples of bizarre hallucinations as they try to explain to a judge (who isn't particularly tech-savvy) that it doesn't even think. It's just next-token prediction based on large quantities of training data. "Your honor, we're nowhere even close to AGI!"

If I didn't know any better, I'd think this is some elaborate 5D chess move by Elon Musk to troll OpenAI into admitting in a courtroom that they are full of shit, which might be considered "misleading investors" by the SEC.

That's actually quite a fascinating take. I hope this happens in open court with the entire thing live-streamed. I would love that.

OpenAI could maybe escape by (correctly) claiming AGI is not on the cards...

I don't think so. Sam Altman's public statements have been loose so far; the man has no control and no restraint, and that's for everyone to see. And all those statements he's made about AGI are on the public record, so I don't think they'll get away with this easily.

Open AI models in the hands of Xi Jinping might not be a good idea.

Because the US is so benevolent??? At what point do people realize there are no boundaries to the effects of AI-enhanced social media or climate change? Relationships are by definition two-way.

Great point tbh

I don't think Musk has a case. OpenAI has a nonprofit branch and a for-profit one. If their lawyers are any good, and if Microsoft is not a total fool, they will have done their homework beforehand in insulating the two.

As to Musk suing people, he's just Musk. The biggest nut and mad genius the world has got.

Wow! Elon Musk complaining about Altman not being "consistently candid". That is chutzpah.

Also, tip for all you AI pessimists. Make a bundle by shorting Nvidia.

Mar 1 · edited Mar 1

I agree Musk has a point.

Over time it seems you have minimized the chances that AGI might soon be in a position to "destroy every human in the universe". I wonder if you are shifting to see that as a more plausible threat? (I do see it as a plausible threat beginning in the 2-3 decade timeframe)

author

2-3 decades are hard to project. i can’t really speak with confidence 30 years out. not unreasonable to raise the question.

I would suggest that if the eradication of humanity were at all plausible in as short as 20 years, we should not be downplaying that risk but instead should be freaked the F*** out. Not because we know it will happen, but because we know it is plausible.

From your writings I am not sure what you will think of my argument for plausibility, but I will provide it.

Multi-headed transformers with positionally encoded inputs allowed us to bootstrap arbitrarily deep domain theories into colossal networks. In one giant step this gave these systems something like Kahneman's Type-I reasoning: entirely unconscious, but quite capable of being creative in ways similar to how humans are creative on 250-millisecond timescales.

I think the jump to Type-II reasoning will likely be similarly discrete. Just as in humans, I think machine Type-II reasoning will occur as modulated, repeated use of Type-I reasoning. I don't mean to imply that current systems can do this, or that we know how to bootstrap Type-II reasoning, only that we will probably figure it out, and that it will probably be, at its core, a very tiny "seed" algorithm, just as transformers are. So I agree that performance on Type-II reasoning is "faked" today, as you love to show all of us. But this says nothing about what the next jump will look like, any more than pre-transformer performance gave evidence about how post-transformer systems would perform... it did not give us even a SHADOW of what we commonly see today!

Telling the world loudly how far away we are in performance terms misleads the world about how far away we are chronologically.

I think we are playing with a fire so much brighter than any humanity has ever touched... indeed it is the last fire we will touch. I just don't know how it all goes down, or if there is any realistic pathway through...

As long as capitalism exists, the threat will be ever more serious. Money used to be just a medium of exchange; now everyone wants money. YouTubers, TikTokers, OnlyFans creators, etc. all show the direction in which we're going. Everyone will go to any length to get one more click on social media so the dollar meter ticks. Three decades is too long; I'd be surprised if we last a decade and a half from now.

Well, the accumulation of wealth has been around for a long time, but yes, it makes things much worse in this area.

And perhaps 2-3 decades is too long. But I am really talking about the complete extinction of the human race, so even 2-3 decades is a pretty frightening thought.

He mentioned this in 2014, tbh. Nobody listened. He said that people have no idea what's going on in Silicon Valley. Absolutely no idea.

Good question

In a week of (IMHO) bleak legal news, finally some "good" legal news. Here's hoping the trial makes it to Court TV (is Court TV still a thing?). Then the only real question will be what flavor of popcorn.

“[d]evelopment of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen... but are unlikely to destroy every human in the universe in the way that SMI could."

The bigger threat is actually whack philosophies (in Silicon Valley heads) like longtermism, which this sounds like an echo of. "The mind is the lever that moves the world"... and this is mental.

https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

Humans have graced this planet for a long time. We lived without bulbs, bombs, and tech, and we lived happily because we cherished the conversations we had, the letters we wrote, the travels we made, the art we saw and made, the movies we watched and the music we heard over the radio; we enjoyed the moon landing on TV screens that barely displayed anything (regardless of whether it was real or not). All this happened before modern technology.

The Internet already had piracy, but most of us just pirated for ourselves. Now, for the first time in human history, a company has gone all out, in pursuit of money/greed/capitalism, to replace core human attributes: art, music, film, the right to information. This is the core of our existence; without it, we are remarkably useless. They'll make money from not having workers, and more and more jobs will be extinguished, because as soon as new jobs are created, they'll be automated by AI. Either all of us will be minimum-wage workers or we'll be breaking stones like slaves, because that's the kind of class the corpos want. We're heading straight for Cyberpunk 2077.

I am starting to have doubts about whether having open AI models floating around is a good idea. Given the proliferation of fraudulent GPTs (https://www.wsj.com/articles/welcome-to-the-era-of-badgpts-a104afa8) stemming from the initial openness, we can expect bad actors to become increasingly empowered if we keep AI open. What is Elon thinking?

I don't think genAI was even a good thing. It's in fact making kids dumber, rampant capitalism is on the rise, and this is the peak of it: trillions of data points were gathered to replace the very people who produced them.

I am doubtful about Elon Musk acting for the sake of all mankind. I would rather suppose he would like his share of the potential benefits from recent and future AI developments, or to keep some control over those developments. OpenAI has indeed deviated from its original purposes. There is perhaps a need for a new, nonprofit, ethically guided global organization working on AI, funded outside the stock market. An organization which will aim at safety, reliability, and intellectual-property protection, and which will provide free access and open-source AI products all over the world.

He's kept his word so far; he's always done what he's said he would. Despite the state of Twitter, you're free to say literally anything on the platform; nobody will be blocked for saying "the wrong thing". The features Twitter has now are probably the best they've ever been, apart from the bots.

Mar 2 · edited Mar 2

I am not at all enthusiastic about the deregulation of Twitter carried out by Elon Musk. And what he has done with Twitter may be a good indication of what he could do with an AI-driven system for a global audience. In the same way that Musk is apparently not embarrassed by Twitter diffusing disinformation and hate speech, he will not be embarrassed by GenAI bots that hallucinate and deep-fake. In my opinion, he is the kind of big-tech boss who could amplify the harms resulting from unsafe and unreliable GenAI systems.

Fair point. I don't know what he's going to do.

Also interesting that the lawsuit wants a judge to declare GPT-4 an AGI: it requests "a judicial determination that GPT-4 constitutes Artificial General Intelligence and is thereby outside the scope of OpenAI’s license to Microsoft".

No one can define this term. At best it's some kind of anthropocentric notion of being able to do certain things that humans do, but there is such vast variation in human performance that even this is vague. If the courts accede to this it will be laughable.

It literally replaces hundreds of fields. That should mean something. It's well on its way to being an AGI.

I'm just saying you probably don't know what you mean when you say "AGI" and neither does anyone else. There's not a magic threshold. "Computers" used to be humans, until they were replaced by digital electronic computers. Google maps with automated spoken directions replaced paper maps with your passenger telling you where to turn. The search engine replaced the card catalog. Etc etc. Was any of these "AGI"? I would guess most would say not because none of these was general enough - even though "100s of fields" have now already been replaced by computing devices in this way.

Exactly, but the movies have done an excellent job of portraying what it is for decades now, and that is a problem for AI companies.
