On this I wholeheartedly agree with Elon, and you Gary. Incidentally, the key to all of this work many of us are doing is in your post scriptum:
"Gary Marcus doesn’t play favorites; he calls them like he sees them. When a LeCun or Musk gets things wrong he says so; if they get them right, he says that, too. Our choices in AI should be about ideas and values, rather than personalities."
Amen to that! If only the mainstream media could finally understand this...
Any proprietary tech that becomes open source is very, very, very good for the world.
I became a technical lead working on Fortune 1000 companies' projects because I had access to open source code, esp formerly proprietary software that became freely available in modifiable source code form. With my deepened knowledge via accelerated learning from these software repositories, I could deliver the systems that met the high and difficult expectations of my clients.
Contrast this to a place of economic helplessness where as a young graphic design graduate, majoring in (wait for it) children's book illustration, I struggled to earn a decent income. Nothing wrong with my education - but it wasn't easy to be the next Charles M Schulz or Axel Scheffler. Open source (and various tech bubbles) gave me new opportunities.
As a Christian, I thank God for the work of Richard Stallman and the Electronic Frontier Foundation, they made free software a tangible good and something to reasonably expect as normal and beneficial. So yes, if OpenAI LLMs can be released with an OSD compatible license, it would be a good day for humanity. I'm confident that OpenAI and MSFT would also benefit immensely from this act of goodwill.
In the past open-source has been clearly to the good, but it is certainly possible to imagine future open-source AI in the hands of bad actors doing harm. I see nothing to rule that out, and increasing negative externalities from current AI. So I don’t think it is as simple as you suggest.
I agree. It's not known if threat actors won't be significantly armed by open sourcing GPT3.5+.
But if OpenAI scraped the public internet, how is it any different to criminal groups doing the same (scraping)? Maybe the more problematic question is this: did OpenAI have access to more dangerous, possibly classified documents (or dark web equivalents)? What about politically embarrassing troves of data? Would more eyeballs on the source code help uncover these, if such training did happen? (chicken and egg situation)
(And if we did find, say, hypersonic missile assembly instructions or evidence of a juicy but damaging scandal (or both) in a pre-release examination, are they grounds to shut OpenAI down?)
And, have we asked similar difficult questions with research-oriented toxic open source LLMs like HateBERT and earlier, released versions of Llama2 (which weren't as sanitised)? What about the "safe" LLMs? Are there any dangerous bits in Mistral etc, both open and closed source versions? The biggest Falcon LLM has 180 billion params trained on over 3 trillion tokens - how does a test suite for enormous LLMs in the future look like?
How do we intercept output for prompt injection attacks by malicious actors that may not have been publicly disclosed?
Not just about honor. Wes Roth pointed out that Musk in his suit pointed out that allowing OpenAI non-profit status is analogous to allowing one team in a basketball game to double each point made.
Isn't it really about Musk using OpenAI's shifting goals to keep them from dominating AI, an area Musk would very much like to colonize. He woke up one morning full of OpenAI envy/hate and realized he had a legal way to throw sand in their eyes. When it comes to Musk, it is pretty much all about personal ambition. Even his recent alignment with MAGA is because he hates DEI and COVID shutdowns as, in his eyes, they hurt his companies.
Yes but even back in 2014, his reason for scaring people about AI I suspect was self-serving. Tesla's first Autopilot release was in 2015 so he was heavily into it then and had multiple competitors in the field. His motivation could have been as simple as just suppressing competing AI efforts. Or it could have been part of his longtermism. When it comes to Musk's public statements, I just don't trust them to be honestly reasoned. He's a smart guy, just not an honest smart guy.
To ensure that AI is indeed used for the benefits of humanity, I think that at some point world governments would have to consider creating a global AI research project and regulatory agency. Something like the ITER fusion reactor demonstrator project and the IAEA (International Atomic Energy Agency). This will accelerate the development of real AI (not the current fakery) and make sure that automation benefits all members of society, not just a bunch of tech bros.
It could be a very vulnerable body, though, prone to corruption, manipulation and even weaponisation for war purposes of sorts, not just the military type.
The change from 'digital intelligence' to 'AGI' in the mission is also noticeable. It opens up unrestrained moneymaking. After all, OpenAI's current mission is to ensure "AGI benefits humanity". Given that what they in reality *do* produce isn't AGI at all, they are free from the 'benefiting humanity' requirement for what they *do* produce. Brrr.
this is a five minute video on artificial intelligence and creativity everyone should watch....just finished your book and I use plenty of example like you do on AI understanding instead of learning...I call it AI wisdom...has to be developed!
Thank you for the "and deployment" part. ML is just about making use of DL as a technology, and even Hintons original goal of trying to understand the brain using xNN's as a model seems to have been a failure. The "leaders" in the field are high on their own next funding round hype.
But they _did_ open source some code! Let's not be rash! A _tiny_ sliver of *GPT-2* counts as open source (not even the full 40Gb GPT-2 model, but that's, uh, _semantics_ 🤯), does it not?
And, from their blog: "Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights." (https://openai.com/research/better-language-models) Given that much more powerful models have been open sourced and easily available from HuggingFace and other public sources, is this a modern take on the fable where the dog starves the horse by denying it the hay?
Might cancel my streaming TV subscription - _this_ drama is more compelling... 😂
Reasonably irrational to assume there's a profitless path to AGI
What type of empirical and or replicable/theoretical framework/publication has Marcus produced of late, such that he would be able to help act as a verifier? 🤔🤔🤔
They did a deal with the Devil, believing that their Charter would protect them, but in reality they failed to understand how power works: whoever controls the money, controls the project.
Do think AGI is achievable based on LLMs? Human GI is based in consciousness, which is based in the sensory system, which has its closest analog in the immune system. Consciousness which is not conscious OF something doesn't even make sense. LLMs are not conscious of anything. They sit inert until asked a question.
It would seem that until we create an electronic analog of the immune system, let it accumulate memories in the world, then give it a sensory system and a reason to exist, a felt purpose, then let it experience the world, we will never be any closer to AGI. AGI would have to be experiencing the world, that is conscious, for it to BE anything at all.
How can digital technology even begin to do that? LLMs are no closer to AGI than the stone I just kicked. As for OpenAI being "close" to it, are they 5 years out, like self-driving cars? I will believe it when it falls in love, or becomes enraptured by music.
Are thermostats self-aware? Parameciums? Insects? My cat seems to possess some kind of self-awareness. Lizards have agency, according to Michael Tomasello.
So if I had a general purpose robot capable of, say, correctly responding to commands like, "do the dishes," "wash the clothes," sweep the driveway" that would be wonderful, but self aware? Not buying it. Just a glorified Roomba.
If it could serve as a nanny to my children and they had fun and developed attachment, OK, then I suppose I'd believe. We'd have to talk though, me and the nanny.
I recall there was a guy who fell in love with an LLM, or something. But the LLM can't love him back. He's just a lonely geek.
Perhaps one day robots will be raised and socialized as children. That might do it. I don't see it happening with ones and zeros though. But hey! Keep it coming. This is my favorite subject.
On this I wholeheartedly agree with Elon, and you Gary. Incidentally, the key to all of this work many of us are doing is in your post scriptum:
"Gary Marcus doesn’t play favorites; he calls them like he sees them. When a LeCun or Musk gets things wrong he says so; if they get them right, he says that, too. Our choices in AI should be about ideas and values, rather than personalities."
Amen to that! If only the mainstream media could finally understand this...
I just really hope you will be called as the expert witness and we all get to watch it live.
Would be a huge moral dilemma!
If OpenAI were forced to open source GPT-4 and future advances as a consequence of this suit would that be good for the world? Maybe, maybe not.
Any proprietary tech that becomes open source is very, very, very good for the world.
I became a technical lead working on Fortune 1000 companies' projects because I had access to open source code, esp formerly proprietary software that became freely available in modifiable source code form. With my deepened knowledge via accelerated learning from these software repositories, I could deliver the systems that met the high and difficult expectations of my clients.
Contrast this to a place of economic helplessness where as a young graphic design graduate, majoring in (wait for it) children's book illustration, I struggled to earn a decent income. Nothing wrong with my education - but it wasn't easy to be the next Charles M Schulz or Axel Scheffler. Open source (and various tech bubbles) gave me new opportunities.
As a Christian, I thank God for the work of Richard Stallman and the Electronic Frontier Foundation, they made free software a tangible good and something to reasonably expect as normal and beneficial. So yes, if OpenAI LLMs can be released with an OSD compatible license, it would be a good day for humanity. I'm confident that OpenAI and MSFT would also benefit immensely from this act of goodwill.
In the past open-source has been clearly to the good, but it is certainly possible to imagine future open-source AI in the hands of bad actors doing harm. I see nothing to rule that out, and increasing negative externalities from current AI. So I don’t think it is as simple as you suggest.
I agree. It's not known if threat actors won't be significantly armed by open sourcing GPT3.5+.
But if OpenAI scraped the public internet, how is it any different to criminal groups doing the same (scraping)? Maybe the more problematic question is this: did OpenAI have access to more dangerous, possibly classified documents (or dark web equivalents)? What about politically embarrassing troves of data? Would more eyeballs on the source code help uncover these, if such training did happen? (chicken and egg situation)
(And if we did find, say, hypersonic missile assembly instructions or evidence of a juicy but damaging scandal (or both) in a pre-release examination, are they grounds to shut OpenAI down?)
And, have we asked similar difficult questions with research-oriented toxic open source LLMs like HateBERT and earlier, released versions of Llama2 (which weren't as sanitised)? What about the "safe" LLMs? Are there any dangerous bits in Mistral etc, both open and closed source versions? The biggest Falcon LLM has 180 billion params trained on over 3 trillion tokens - how does a test suite for enormous LLMs in the future look like?
How do we intercept output for prompt injection attacks by malicious actors that may not have been publicly disclosed?
Not just about honor. Wes Roth pointed out that Musk in his suit pointed out that allowing OpenAI non-profit status is analogous to allowing one team in a basketball game to double each point made.
Isn't it really about Musk using OpenAI's shifting goals to keep them from dominating AI, an area Musk would very much like to colonize. He woke up one morning full of OpenAI envy/hate and realized he had a legal way to throw sand in their eyes. When it comes to Musk, it is pretty much all about personal ambition. Even his recent alignment with MAGA is because he hates DEI and COVID shutdowns as, in his eyes, they hurt his companies.
I address this near the end of the essay
Yes but even back in 2014, his reason for scaring people about AI I suspect was self-serving. Tesla's first Autopilot release was in 2015 so he was heavily into it then and had multiple competitors in the field. His motivation could have been as simple as just suppressing competing AI efforts. Or it could have been part of his longtermism. When it comes to Musk's public statements, I just don't trust them to be honestly reasoned. He's a smart guy, just not an honest smart guy.
To ensure that AI is indeed used for the benefits of humanity, I think that at some point world governments would have to consider creating a global AI research project and regulatory agency. Something like the ITER fusion reactor demonstrator project and the IAEA (International Atomic Energy Agency). This will accelerate the development of real AI (not the current fakery) and make sure that automation benefits all members of society, not just a bunch of tech bros.
It could be a very vulnerable body, though, prone to corruption, manipulation and even weaponisation for war purposes of sorts, not just the military type.
The change from 'digital intelligence' to 'AGI' in the mission is also noticeable. It opens up unrestrained moneymaking. After all, OpenAI's current mission is to ensure "AGI benefits humanity". Given that what they in reality *do* produce isn't AGI at all, they are free from the 'benefiting humanity' requirement for what they *do* produce. Brrr.
this is a five minute video on artificial intelligence and creativity everyone should watch....just finished your book and I use plenty of example like you do on AI understanding instead of learning...I call it AI wisdom...has to be developed!
https://aeon.co/videos/why-strive-stephen-fry-reads-nick-caves-letter-on-the-threat-of-computed-creativity
Lol
Thank you for the "and deployment" part. ML is just about making use of DL as a technology, and even Hintons original goal of trying to understand the brain using xNN's as a model seems to have been a failure. The "leaders" in the field are high on their own next funding round hype.
I wondering if Elon and Sam are coordinating this to bring more hype just before the GPT5 drop.
But they _did_ open source some code! Let's not be rash! A _tiny_ sliver of *GPT-2* counts as open source (not even the full 40Gb GPT-2 model, but that's, uh, _semantics_ 🤯), does it not?
https://github.com/openai/gpt-2 (5 year old codebase!)
The libraries are so old I'm not sure if they can be loaded/run properly anymore on modern Python.
https://github.com/openai/gpt-2/blob/master/requirements.txt
And, from their blog: "Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights." (https://openai.com/research/better-language-models) Given that much more powerful models have been open sourced and easily available from HuggingFace and other public sources, is this a modern take on the fable where the dog starves the horse by denying it the hay?
Might cancel my streaming TV subscription - _this_ drama is more compelling... 😂
Reasonably irrational to assume there's a profitless path to AGI
What type of empirical and or replicable/theoretical framework/publication has Marcus produced of late, such that he would be able to help act as a verifier? 🤔🤔🤔
I am blocking you for a bit. The first part was a reasonable question, the second part an ad hominem non sequitur.
The profit motive has done more to benefit humanity than any other motive.
They did a deal with the Devil, believing that their Charter would protect them, but in reality they failed to understand how power works: whoever controls the money, controls the project.
Do think AGI is achievable based on LLMs? Human GI is based in consciousness, which is based in the sensory system, which has its closest analog in the immune system. Consciousness which is not conscious OF something doesn't even make sense. LLMs are not conscious of anything. They sit inert until asked a question.
It would seem that until we create an electronic analog of the immune system, let it accumulate memories in the world, then give it a sensory system and a reason to exist, a felt purpose, then let it experience the world, we will never be any closer to AGI. AGI would have to be experiencing the world, that is conscious, for it to BE anything at all.
How can digital technology even begin to do that? LLMs are no closer to AGI than the stone I just kicked. As for OpenAI being "close" to it, are they 5 years out, like self-driving cars? I will believe it when it falls in love, or becomes enraptured by music.
Are thermostats self-aware? Parameciums? Insects? My cat seems to possess some kind of self-awareness. Lizards have agency, according to Michael Tomasello.
So if I had a general purpose robot capable of, say, correctly responding to commands like, "do the dishes," "wash the clothes," sweep the driveway" that would be wonderful, but self aware? Not buying it. Just a glorified Roomba.
If it could serve as a nanny to my children and they had fun and developed attachment, OK, then I suppose I'd believe. We'd have to talk though, me and the nanny.
I recall there was a guy who fell in love with an LLM, or something. But the LLM can't love him back. He's just a lonely geek.
Perhaps one day robots will be raised and socialized as children. That might do it. I don't see it happening with ones and zeros though. But hey! Keep it coming. This is my favorite subject.