I think "hypeability" is not the only thing that matters here, but actual problem-solving. As far as I've seen, GPT-4 can solve many real-world problems such as programming a Chrome extension. This doesn't sound as Humanity-advancing but has value nonetheless. Taking GPT-4 just for its "true intelligence" is not sensible.
I think "hypeability" is not the only thing that matters here, but actual problem-solving.
As far as I've seen, GPT-4 can solve many real-world problems such as programming a Chrome extension. This doesn't sound as Humanity-advancing but has value nonetheless.
Taking GPT-4 just for its "true intelligence" is not sensible.
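For a sense of scale, a "real-world problem" like that can be quite small. Here's a minimal sketch of what such an extension might involve; the word-counting behaviour and file names are my own illustrative choices, not anything GPT-4 actually produced:

```ts
// manifest.json (Manifest V3) — the metadata Chrome needs:
// {
//   "manifest_version": 3,
//   "name": "Word Counter",
//   "version": "1.0",
//   "content_scripts": [
//     { "matches": ["<all_urls>"], "js": ["content.js"] }
//   ]
// }

// content.ts — compile to content.js with `tsc`.
// Counts the words on the current page and shows the total
// in a small badge in the corner.
function countWords(text: string): number {
  return text.split(/\s+/).filter((w) => w.length > 0).length;
}

const badge = document.createElement("div");
badge.textContent = `${countWords(document.body.innerText)} words`;
badge.style.cssText =
  "position:fixed;bottom:8px;right:8px;padding:4px 8px;" +
  "background:#222;color:#fff;font:12px sans-serif;z-index:99999;";
document.body.appendChild(badge);
```

Not world-changing, but exactly the kind of well-scoped, boilerplate-heavy task where these models already save people real time.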
I’d be curious to know how you view the effect this is going to have on “human programming” at large. As someone just starting out in a web development career, I know how much I’ve learned by having to think through every piece of code myself, even opting to use as few third-party tools and libraries as possible in order to first get a good sense of the underlying systems at play. If programmers start to rely increasingly on the current crop of AI systems, it seems to me there’s a danger of deskilling here, especially a premature one, given that we are still dealing with models that hallucinate and lack the ability to “truly reason” themselves.
To be honest, there’s a very personal feeling of loss involved here as well, since I genuinely enjoy writing code. Ultimately, of course, it’s hard to argue with the economics of the matter, and I suppose the holy grail of AI research is to one day make such human work obsolete anyway.
I'm just sceptical that anyone is going to invest in the heavy lifting needed to really approach AGI: salience/relevance, composition, and so on. I think the current incentive landscape simply favours short-term, low-investment goals.
Absolutely. That's why AGI will need work from universities and non-profit research centers around the world, which are not worried about profitability.
I also think full AI will require a situation in which HUMANS are not worried about profitability; otherwise, expect big anti-AI movements.
Capitalism will have to be modified somewhat: the early AI pioneers either forgot or simply handwaved away the slight problem that if a robot works as well as a human, we all starve to death under the system as-is.
Right, AI is not neutral with respect to capitalism. Many of the futuristic hypotheses forget the simple fact that AI progresses only insofar as it serves capitalist interests…
Thank you. At least one other person in the comments section knows this.
It is one of my greatest annoyances with AI that worries about rogue AI (which at the moment is worth worrying about roughly as much as the sun becoming a red giant) seem to predominate over the employment problem (which is happening NOW).
I see exactly 4 comments out of 84 in the comments section mentioning this, the 3 in this discussion included.
OK, rant over.
Anyway, that is something that will have to be dealt with. How do you propose dealing with it? I've thought about this for a while and keep coming back to UBI (universal basic income). I feel like I'm missing something.
Another issue with AI + capitalism is that business owners tend to assume they will make more profit if they remove the humans (lower input costs).
Just one small problem: where do you get profit if no one has any money?
I think "hypeability" is not the only thing that matters here, but actual problem-solving.
As far as I've seen, GPT-4 can solve many real-world problems such as programming a Chrome extension. This doesn't sound as Humanity-advancing but has value nonetheless.
Taking GPT-4 just for its "true intelligence" is not sensible.
I’d be curious to know how you view the effect this is going to have on “human programming” at large. As someone just starting out in a web development career, I know how much I’ve learned by having to think through every piece of code itself, even opting to use as little third-party tools and libraries as possible in order to first get a good sense of the underlying systems at play. If programmers start to increasingly rely on the current crop AI systems, it seems to me there’s a danger of deskilling here, especially in a premature way where we are still dealing with models that hallucinate and lack the ability to “truly reason” themselves.
To be honest, there’s a very personal feeling of loss involved here as well, since I genuinely enjoy writing code, though ultimately, of course, it’s hard to argue with the economics of the matter and I suppose the holy grail of AI research is to one day make such human work obsolete anyway.
I'm just sceptical that anyone is going to invest in the heavy lifting needed to really approach AGI - salience/relevance, composition, etc. I think the current incentive landscape simply favours short-term, low investment goals.
Absolutely. That's why AGI will need work from universities and non-profit research centers around the world, which are not worried about profitability.
I think, also, full AI will require a situation in which HUMANS are not worried about profitability, otherwise expect big anti-AI movements.
Capitalism will have to be modified somewhat: the early AI pioneers either forgot or simply handwaved away the slight problem that if a robot works as well as a human we all starve to death under the system as is.
Right, AI is not neutral to capitalism. Many of the futuristic hypotheses forget the simple fact that AI is progressing only in the measure it serves capitalist interests…
Thank you. At least one other person in the comments section knows this.
It is one of my greatest annoyances with AI that worries about rogue AI (which at the moment is worth worrying about roughly as much as is the sun becoming a red giant) seem to predominate over the employment problem (which is happening NOW).
I see exactly 4 comments, the 3 in this discussion included, out of 84 in the comments section mentioning this.
OK, rant over.
Anyway, that is something that will have to be dealt with. How do you propose dealing with it? I've thought about this for a while and keep coming back to ubi. I feel like I'm missing something.
Another issue with AI + capitalism is that business owners tend to assume it will make more profit if they remove the humans (lower input cost).
Just one small problem: where do you get profit if no one has any money?