Seems like AI development is becoming more about passing standard tests than tackling the hard problems of intelligence.
Hacks that create a hypeable and sellable product are what's favoured.
I think "hypeability" is not the only thing that matters here, but actual problem-solving.
As far as I've seen, GPT-4 can solve many real-world problems such as programming a Chrome extension. This doesn't sound as Humanity-advancing but has value nonetheless.
Taking GPT-4 just for its "true intelligence" is not sensible.
I’d be curious to know how you view the effect this is going to have on “human programming” at large. As someone just starting out in a web development career, I know how much I’ve learned by having to think through every piece of code myself, even opting to use as few third-party tools and libraries as possible in order to first get a good sense of the underlying systems at play. If programmers start to rely increasingly on the current crop of AI systems, it seems to me there’s a danger of deskilling here, especially in a premature way, while we are still dealing with models that hallucinate and lack the ability to “truly reason” themselves.
To be honest, there’s a very personal feeling of loss involved here as well, since I genuinely enjoy writing code. Ultimately, of course, it’s hard to argue with the economics of the matter, and I suppose the holy grail of AI research is to one day make such human work obsolete anyway.
I'm just sceptical that anyone is going to invest in the heavy lifting needed to really approach AGI - salience/relevance, composition, etc. I think the current incentive landscape simply favours short-term, low investment goals.
Absolutely. That's why AGI will need work from universities and non-profit research centers around the world, which are not worried about profitability.
I think, also, full AI will require a situation in which HUMANS are not worried about profitability, otherwise expect big anti-AI movements.
Capitalism will have to be modified somewhat: the early AI pioneers either forgot or simply handwaved away the slight problem that if a robot works as well as a human we all starve to death under the system as is.
Right, AI is not neutral with respect to capitalism. Many of the futuristic hypotheses forget the simple fact that AI is progressing only to the extent that it serves capitalist interests…
Thank you. At least one other person in the comments section knows this.
It is one of my greatest annoyances with AI that worries about rogue AI (which at the moment is worth worrying about roughly as much as the sun becoming a red giant) seem to predominate over the employment problem (which is happening NOW).
I see exactly 4 comments, the 3 in this discussion included, out of 84 in the comments section mentioning this.
OK, rant over.
Anyway, that is something that will have to be dealt with. How do you propose dealing with it? I've thought about this for a while and keep coming back to UBI. I feel like I'm missing something.
Another issue with AI + capitalism is that business owners tend to assume they will make more profit if they remove the humans (lower input costs).
Just one small problem: where do you get profit if no one has any money?
One is an intermediate step toward the other. Just like humans, LLMs have to go to high school and college before they can work on their PhDs.
AI researcher Yannic Kilcher pointed out that standardized tests assume that the person taking the test is good at the things humans are typically good at. They are meant to compare humans to other humans in areas with high variance. They exist, in short, to measure things that humans are typically not so good at.
That an AI is good at things humans are often bad at is noteworthy, but it isn't new. It also doesn't say much about whether they're good at things humans take for granted, which is the main problem for LLMs in this day and age.
I think about this in terms of problem domains. Not all problems solved by humans are in the LLM solution set. But what I can tell you is that there is a trend of more and more models (LLMs, DNNs, RNNs) solving human problems. The trend in our models is converging toward AGI, though it is just a trend, not a guarantee of future returns. Will we hit a wall? Sure, but the problems we can now solve were previously only solvable by humans. These new models simply expand the domain of problems computers can solve. The question is: for our current economy, what percentage of problems is now in the LLM bucket?
I don't think that next-token prediction is sufficiently sophisticated to bring about the emergence of the intelligent behaviour we're after. LLMs might be a good fundamental way of compressing a lot of textual data, but they have to be coupled with a different paradigm. GPT-4 is still unable to generate any novel hypotheses about anything despite having been trained on virtually all the knowledge available. It is unable to admit its ignorance and will always be confident about falsehoods. At this stage, it seems crazy to believe that adding more and more parameters and throwing more and more compute at it will magically cause the qualitative leap to true intelligence that we still haven't seen. A newborn baby exhibits, in some sense, a deeper intelligence than GPT-4, by being able to meta-learn, that is, continually upgrade its own learning algorithm. GPT-4 is following the same trend as most publications in ML/AI nowadays: add more layers and parameters, spin up more GPU instances, and get your 1% relative improvement on some ancient and unrepresentative benchmark. We really need to start getting more creative than that.
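For anyone unfamiliar with the term, "next-token prediction" here just means the standard language-modelling objective: given a prefix, maximize the probability of the token that comes next. A minimal, hypothetical PyTorch-style sketch of that training step (toy sizes, random data, not any particular model's actual code) looks roughly like this:

```python
import torch
import torch.nn as nn

# Hypothetical toy setup: a tiny transformer layer trained with the
# standard next-token prediction (cross-entropy) objective.
vocab_size, d_model, seq_len = 1000, 64, 32

embed = nn.Embedding(vocab_size, d_model)
encoder = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
lm_head = nn.Linear(d_model, vocab_size)
params = list(embed.parameters()) + list(encoder.parameters()) + list(lm_head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

tokens = torch.randint(0, vocab_size, (8, seq_len))  # stand-in for real text

# Inputs are positions 0..n-2, targets are positions 1..n-1:
# the model is only ever asked "what comes next?"
inputs, targets = tokens[:, :-1], tokens[:, 1:]

# Causal mask so each position can only attend to earlier positions.
n = inputs.size(1)
mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)

optimizer.zero_grad()
hidden = encoder(embed(inputs), src_mask=mask)
logits = lm_head(hidden)                                  # (batch, seq, vocab)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
                                   targets.reshape(-1))
loss.backward()
optimizer.step()
```

The criticism above is essentially that everything the model "knows" has to fall out of optimizing that single objective ever harder.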
I think, in order to believe that our current trajectory with these kinds of AIs will lead to AGI, you have to believe that intelligence will simply emerge once we have enough processing power: that all problem-solving comes back to one type of problem-solver.
With cog neuro constantly advancing, it seems to me like that's getting less and less likely.
Yeah, I'm not too opposed to the idea of all problem solving coming back to one type of problem solver, or that there is an elegant algorithm for AGI yet to be discovered. What I do know is that the algorithm behind GPT-3/4 is not it.
To me, it's like we don't really understand how humans problem-solve. Except it's something to do with rapidly being able to discount non-productive avenues of investigation, and we don't know how the brain does this. And then it's like we think we can just data-crunch our way through as an alternative. I mean, it's worth a try but I'm not hopeful.
I think the best thing that might come from the whole AI debacle is that we realise that the human brain is somehow doing some pretty amazing stuff and we need to study it more.
Lots of opinions in this comment thread, but no references to current research, and few arguments beyond individual perception. Honestly, I don't think we have studied these *transformer* models (LLM will soon be a misnomer, or already is) for long enough after they have gained multimodal capabilities. Multimodality seems to improve generalization (anyone expecting perfect performance at this stage in the game may be jumping the gun); see Google's new PaLM-E, for instance. Don't discount transformer models yet. I'm not saying a future algorithm might not do the job better, but I sincerely think this is the right track. When most new technologies are introduced, people dismiss the first publicly hyped products, only for the product to mature and do exactly what was advertised on the tin a few years later. I recognize this is somewhat different, but I still see a similar thing going on here. The differences between a human mind and an LLM, and the way it learns, seem to me not to indicate a lack of generalization or abstraction, but merely an effect of single-modality inputs and a lack of real-world experience: more a lack of input quality than a lack of intellectual substance. I think it is a false assumption that the current limits of the current technology indicate a "brick wall" rather than a continued progression. I don't see evidence of a "brick wall", just viral public doubt springing up as a counter-effect to the wave of AI optimism. So far, none of the "transformer models can't have real intelligence, it's just statistics" arguments have been any more convincing than the continuing results from parameter scaling, which have landed as predicted.
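To make the multimodality point concrete: PaLM-E-style models roughly work by projecting continuous inputs (e.g. image embeddings) into the same embedding space as text tokens, so one shared transformer processes both. A simplified, hypothetical sketch of that idea (toy sizes, random data, PyTorch assumed, not PaLM-E's actual code):

```python
import torch
import torch.nn as nn

# Hypothetical toy illustration of PaLM-E-style multimodal input fusion:
# continuous (e.g. image) features are linearly projected into the language
# model's token-embedding space and interleaved with ordinary text tokens.
vocab_size, d_model, img_feat_dim = 1000, 64, 128

token_embed = nn.Embedding(vocab_size, d_model)
img_proj = nn.Linear(img_feat_dim, d_model)   # maps image features to "soft tokens"
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)

text_ids = torch.randint(0, vocab_size, (1, 10))   # stand-in for a tokenized prompt
img_feats = torch.randn(1, 4, img_feat_dim)        # stand-in for 4 image-patch embeddings

text_emb = token_embed(text_ids)                   # (1, 10, d_model)
img_emb = img_proj(img_feats)                      # (1, 4, d_model)

# The "multimodal sentence": image soft-tokens prepended to the text tokens,
# then processed by one shared transformer stack.
sequence = torch.cat([img_emb, text_emb], dim=1)   # (1, 14, d_model)
hidden = layer(sequence)
print(hidden.shape)                                # torch.Size([1, 14, 64])
```

The point is only that nothing in the architecture is text-specific; whether that is enough to close the generalization gap is exactly what's being debated here.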
Opponent processing!
You're saying that learning to fake maturity is a necessary step to attaining maturity? I don't buy that.
I can't prove it either. I'm not an AI expert or a cognitive psychologist. But drawing from the only other advanced intelligence on this planet, I can pull together some hypotheses.