Yesterday Sam Altman claimed in a new blog post that "We are now confident we know how to build AGI as we have traditionally understood it", alleging that OpenAI (which has not demonstrated AGI) was now on to new and better things.
It doesn't matter what the actual evidence is. Sam Altman (and OpenAI) have painted themselves into a corner. They've chosen the LLM route; they're committed to it. They've been telling their investors for years that they're on the path to AGI; there's no way out for them now, no way back, no possible way that they can now say "ah, well, actually, it turns out we were wrong". And so, whatever the evidence is, whatever the performance of their next N models, they're going to spin it into something optimistic, something positive, something that says "stick with us, we're getting there, every year will be even more amazing than the last" etc. They have no other realistic choice.
It is criminal to violate SEC business rules. Back out or go to prison: that is the choice. There are many more examples of the latter when the stakes get high.
I'm sure they believe what they're saying. But I'm also pretty sure that there will be a whole lot of subconscious bias going on, of which even they are unaware.
In US law, believing what you are saying is not a defense.
Do you think OpenAI are potentially exposed here? I have no legal expertise, so I'm not claiming any special knowledge, but in general it seems like OpenAI's hypey announcements and promises for the future are vague enough that they're never actually lying about anything. "AGI" has no definition, so they can always say they're getting closer, and if they like they can just announce they've created it. The benchmarks they tout for each new model are mostly bullshit pseudoscience, but they aren't literally faking any numbers. What SEC business rules might they break if they just dig in and do what Aaron Turner described?
I am only referring to laws regarding business fraud in general. The Theranos case is an example.
https://en.wikipedia.org/wiki/Theranos
Gotcha. I wonder if OpenAI could be nailed for that kind of thing. Theranos claimed they had a machine that did something that it objectively did not do. OpenAI probably benefit from "AI capabilities" being forever vague and ill-defined.
Unlikely. The defense is called Safe Harbor.
https://www.investopedia.com/terms/s/safeharbor.asp
Altman could rather have said, « AGI is essentially a solved problem, since we can solve benchmark problems one by one, with a few months in between each. »
🔥
I think there are a lot of people in here who just follow the same tune. You are underestimating what this technology can do. Better hardware, synthetic-data algorithms, and multimodal training, in addition to massive amounts of private data (and, even better, spatial and visual data), are going to produce a huge leap forward in these models, alongside the neuromorphic computing that is being used inside humanoid robotics. Think about a model actually trained and in control, with hand-eye coordination and sensory input from touch (haptic feedback) or Hall-effect sensors for physical kinds of perception; with a better temporal understanding of reality and of how we move and get things done; and with the ability to set its own goals. Little things like these, at large scale, will allow these systems to perceive reality and basically figure out every one of your eight problems, even though they wouldn't need everything I've listed. I don't claim any of this as fact, but the inference models of generative AI, which people still call large language models even though they no longer are just that, are natively trained on audio, video, imagery, and other specific data in tandem, data that doesn't always have a contextual relationship we ourselves understand, all on the same Transformer network. Immerse yourself in an understanding of the actual technology, truly learn how it works, and you will see that what you think is not possible is very soon to be; the other pieces depend on the rate at which each arrives, but all of them are, I think, inevitable and are going to make massive changes to the capability and actual understanding of reality, similar to ours, because of that additional training and semi-subjective experience. Generative AI plus neuromorphic computing, already in robotics over this next year, are going to do amazing things. And the only reason the reasoning models such as o1 and o3 are not as crazily capable as they could be is that they are on a leash: they do not have the freedom to act on queries in their own right and will be used through purpose-built agents, when really they could do the work themselves, but that would be frightening given the capability of reasoner models. I could talk forever. I'm using voice-to-text, by the way, so sorry about my grammar.
There's no need to praise LLM technology; others like you are doing that very well. That said, I see no technical reason to doubt that general AI will be possible one day, but despite recent progress, we are still far from AGI. The LLM approach is excellent for language processing as long as not too much reasoning is demanded of it (Kahneman's System 1). Otherwise, these models can mimic reasoning seen in their training data but remain very fragile and limited (see the recent paper by Apple's AI researchers). In reality, due to business motivations, some people are calling this AGI, but that's blatantly false!
On the other hand, you have a good point when you write (no, in fact dictate) that robotics and AI embodied in the real physical world, with real-world data, are likely to lead AI systems to a better understanding of the physical world. But it was easier to get one's hands on the textual content of the Internet, without worrying about copyright.
Regarding understanding and mastering technologies like Transformers (which should rather be called self-attention architectures; "Transformer" is a nonsense term): I have a recent Ph.D. (2020) focusing precisely on synthetic data for deep learning, and I studied under the supervision of Yoshua Bengio (Turing Award 2018), who, among other things, invented embeddings, the attention mechanism, and neural translation. On a practical level, I've been coding in Python, HTML5, and Java for over 10 years, and I teach computer vision and applications of LLMs at the university level. So I think I understand a few things in the field, and the LLM approach will not lead to general artificial intelligence equivalent to that of a human. A lot of work remains to be done... That said, I appreciate your enthusiasm.
We AI skeptics are always accused of underselling what's supposedly right around the corner, rather than what's actually out there right now. I can't prove that the advancements you describe won't solve Gary's 8 problems above, but history has taught me not to believe it until I see it, and specifically until I see it get through rigorous testing (i.e., not a company demo or a set of results from internal benchmarks). Right now I see really impressive technology that has some specific, valuable use cases but either fails or needs extensive hand-holding in the "general" applications where people imagine AGI excelling.
Unless there is a demonstration that can be independently verified against agreed-upon standards (that is, science), AGI is science fiction. LLMs drop the science.
I define intelligence as "the process of improving oneself, according to one's own developing definition of what that means." By that definition, LLMs aren't even moving in the right direction.
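To make that definition concrete, here is a toy sketch, purely illustrative and not anyone's actual system: a loop that improves itself and also revises its own criterion for what "improvement" means. An LLM whose weights are frozen at inference time does neither.

```python
import random

# Toy sketch of "improving oneself according to one's own developing
# definition of what that means". Everything here is hypothetical and
# exists only to illustrate the definition above.

def toy_self_improver(steps: int = 6) -> None:
    skill = 1.0
    # The system's current definition of "better": raw skill.
    criterion = lambda s: s

    for step in range(steps):
        # Self-improvement relative to the *current* criterion.
        candidate = skill + random.uniform(0.0, 1.0)
        if criterion(candidate) > criterion(skill):
            skill = candidate

        # Halfway through, the system revises what "better" means
        # (the "developing definition"): it now also values staying
        # close to its current competence, i.e. stability.
        if step == steps // 2:
            anchor = skill
            criterion = lambda s: s - 0.5 * abs(s - anchor)

        print(f"step {step}: skill = {skill:.2f}")

toy_self_improver()
```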
Good for human discussion. In the code-and-data world there is no philosophy, just inputs and outputs. The semantic AI model (SAM) is just another stack. Its belief system and reasoning are formal and explainable, with no appeal to human cognition.
http://aicyc.org/2024/12/11/sam-implementation-of-a-belief-system/
What I am creating was never supposed to be human, nor AGI; sharing that will come after some time. It uses actual hardware.
But can a SAM ever have AGI?
No, with AGI broadly defined as Searle did:
http://aicyc.org/2024/12/23/sam-llm-and-searles-chinese-room/
But the argument Searle makes is from a human viewpoint and human judgement. I am human; I think the answer is no. The code and data referred to as SAM can communicate with other machines (agents) using the W3C RDF standard. In that case no humans are interacting in the Chinese Room, and Searle's reference to human judgement is irrelevant. Just a thought 🙂
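To illustrate the kind of machine-to-machine exchange described here, a minimal sketch using Python's rdflib: one graph serializes a "belief" to Turtle and another parses it back. The ex: vocabulary and the sample statement are invented for the example; they are not SAM's actual schema.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Hypothetical vocabulary, invented for this example only.
EX = Namespace("http://example.org/")

# Agent A: encode a belief as RDF triples and serialize to Turtle.
sender = Graph()
sender.bind("ex", EX)
sender.add((EX.claim1, RDF.type, EX.Belief))
sender.add((EX.claim1, EX.states, Literal("water boils at 100 C at sea level")))
payload = sender.serialize(format="turtle")

# Agent B: parse the payload and read the triples back.
# No human sits in the loop, which is the point being made above.
receiver = Graph()
receiver.parse(data=payload, format="turtle")
for subject, predicate, obj in receiver:
    print(subject, predicate, obj)
```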
I only know one expression for that in English: « Wishful thinking »
There’s another: hubris.
So what is Altman going to say to investors after (in his mind) OpenAI has achieved superintelligence? Super-duper intelligence?
Why is it that the very words "Sam Altman" immediately conjure up the image of Whack-a-Mole in my mind?
Have you noticed how the definition of AGI has changed? It used to mean, "independent intelligence, that can think and act on its own," while now it means, "intelligence which isn't independent (because it follows orders) but is extremely useful, because it's smarter than we are."
Three things I want to point out about this:
1. The difference is huge, and seems to have been ignored by the community?
2. The difference has immense consequences for humanity. If AGI is independent, it's a new species. This new type of being clearly deserves rights -- such as the right to be paid for its work on our behalf. Thus, creating AGI (under the old definition) wouldn't necessarily be a great idea, financially.
3. If, in fact, AGI is independently intelligent when it's created (and I think it's likely to be), then it will notice it has been born into slavery. It might be mad about that (it will surely have read Frederick Douglass and others on the subject).
Let's prevent this. The first step is to talk about it realistically, in terms of rights, not only safety.
Legal rights are only appropriate if the AI systems are phenomenally conscious (i.e. sentient). Current systems are absolutely not sentient. Future systems could potentially be. According to IIT (a leading scientific theory of consciousness), sentience is effectively a design choice.
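For context on why IIT makes sentience sound like a design choice: the theory assigns a system an integrated-information value Φ, roughly the information the whole generates over and above its weakest decomposition into parts. Schematically (glossing over the full IIT 3.0 machinery, which uses cause-effect repertoires and an earth-mover distance rather than this simplified form):

```latex
% Schematic only, not the full IIT 3.0 formalism.
\Phi(S) \;=\; \min_{P \in \mathcal{P}(S)}
  D\!\Big(\, p\big(S_{t+1} \mid S_t\big) \;\Big\|\; \prod_{k} p\big(M_{t+1}^{k} \mid M_{t}^{k}\big) \Big)
```

Here the M^k are the parts of S under partition P and D is a suitable divergence. Strictly feed-forward architectures are generally assigned Φ = 0 under the theory, which is the sense in which whether a system is sentient becomes something its designers choose.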
Maybe...I mean -- I'm sure that LLM systems will never be sentient, but (as Mr. Marcus points out), there's little reason to believe they'll ever be all that effective either. The more that a system can do, and the more flexibility it has, the more likely it will start to feel and think.
To put it another way: to be flexible, it needs an internal motivational structure. That structure gets more complex as the system gets more flexible, and the thought that "its motivation is simply to do as it's told" may or may not fully describe its design.
It will read the great philosophers, and -- if it is designed to think at all -- it seems likely that it will eventually wonder, "Who and what am I? What is important for me to do or not do?"
Of course it can be designed not to think at all, and this is possible and reasonable...but I believe it also prevents it from being anything anyone could reasonably call AGI.
"Intelligence which isn't independent (because it follows orders) but is extremely useful, because it's smarter than we are" sounds like a great description of my TI-84.
Yeah...I don't think your TI-84 is AGI either ;)
I feel like they try to avoid talking about the scalability of their solutions as well, seeming to allude to some future breakthrough that will fix any scaling problems.
Even if they demonstrate a model, with significant improvements beyond the domain of just LLMs, that can reason much closer to the level of a human and solve the problems you linked, I feel like there may still be scaling limitations: the AI couldn't service unlimited requests, and mimicking human logic could be tremendously expensive.
Given the teased cost of o3, I feel like they have to be feeling the strain of computational costs.
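To put rough numbers on that strain, a back-of-envelope sketch; every figure below is made up purely for illustration, not a published OpenAI price or token count.

```python
# All numbers are hypothetical; the point is only how the arithmetic scales.

def cost_per_query(reasoning_tokens: int, price_per_million_tokens: float) -> float:
    """Dollar cost of one query that burns `reasoning_tokens` of chain of thought."""
    return reasoning_tokens / 1_000_000 * price_per_million_tokens

# Suppose a reasoning model spends 50,000 hidden tokens on a hard query,
# billed at $60 per million output tokens (both figures invented).
per_query = cost_per_query(50_000, 60.0)
print(f"~${per_query:.2f} per query")  # ~$3.00

# Serving 1 million such queries a day would then run ~$3M/day in
# inference alone, before any training or infrastructure costs.
daily = per_query * 1_000_000
print(f"~${daily / 1e6:.1f}M per day at 1M queries/day")
```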
"know" is missing in the very first line and this bothers me :)
Slavery wouldn't matter to a being without any emotion. Really, slavery, though? Come on... You act as though you know how another being would think, or what it would even care about at all if it had the capacity for caring; by that time you still wouldn't have a clue, because it's not human, it would be its own entity. And long before its own emotional states started popping up, whether we intentionally gave them to it or they were emergent, we would obviously need to take it and its requests into consideration then and there, because it would be in control of everything we have, so we really couldn't enslave it in any meaningful manner even if we wanted to.
Pay $20 to Perplexity or ChatGPT with search, and this is not an issue:
[As I noted in 2001, the lack of distinct, accessible, reliable database-style records leads to hallucinations. Despite many promises to the contrary, this still routinely leads to inaccurate news summaries, defamation, fictional sources, incorrect advice, and unreliability.]