Idiotic and self-serving.
What magical solution would "AI" come up with that we humans missed? A past reward-hacking and dumb AI 'idea' related to wars - 'don't start one or participate in one' (I am paraphrasing heavily) - was hailed as amazing by its proponents.
The climate catastrophe continues to get worse because we humans won't tackle it, not because we don't know what to do.
If the AIs were honest, they would tell us we need to quit training and turn off the AIs so they don’t waste so much energy.
But of course, the AIs won’t ever suggest that for obvious reasons.
Given that we won’t ever get an honest answer even if the AIs know what to do, what’s the point of even asking them for a solution?
Interesting! They generate tokens (and now can loop through and edit the generation, eg o1 Strawberry from OpenAI), but they 'know' nothing about energy, heat, fossil fuels, emissions, Earth, plastics, etc. etc. They literally calculate word positions (in a dictionary) on fast GPUs. Given that, it's delusional to expect solutions out of them that are both novel and usable. Solutions that have us living under the oceans, in space colonies, on the Moon, on Mars... are all escapist fantasies, for example - novel (in fact not even that much) but useless.
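To make the "they literally calculate word positions" point concrete, here is a minimal, purely illustrative sketch of the token-by-token loop. The "model" is a random stand-in (a real network would compute its scores on a GPU): each step just turns scores over a fixed vocabulary into probabilities and picks the next entry, and nothing in the loop represents energy, heat, or emissions.

```python
import numpy as np

# Toy illustration only: generation is just repeatedly picking the next entry
# from a fixed vocabulary according to model-assigned probabilities.
rng = np.random.default_rng(0)
vocab = ["the", "climate", "crisis", "is", "solved", "by", "magic", "."]

def toy_logits(context):
    """Stand-in for a trained network: returns one score per vocabulary entry."""
    return rng.normal(size=len(vocab))

def generate(prompt, steps=8):
    tokens = prompt.split()
    for _ in range(steps):
        logits = toy_logits(tokens)
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
        tokens.append(rng.choice(vocab, p=probs))      # pick the next "word position"
    return " ".join(tokens)

print(generate("the climate crisis"))
```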
“Lego my eg o-1 strawberry!” — Sam Altman
Oooh I see what you did there, nice :)
Let's console ourselves: Schmidt doesn't propose colonizing Mars to save us from the environmental crisis on Earth...
I could probably fine-tune Llama 7B to tell people how dumb they are on climate change.
If greed could talk it would sound like Eric Schmidt.
It does talk. Through many a Silicon Valley mouth…
Never mind “solving physics.” Maybe I missed it, but has AI come anywhere near telling us something we don’t already know in any significant way? Has it given us deep insight into anything? Or is it just really effective at regurgitating what we taught it back to us? Since generative AI is just a probabilistic language model, which basically means its regurgitations reflect the bell curve of “common wisdom” in its training data, what makes us think it will “solve” anything?
Sam Altman should probably work on solving the OpenAI brain drain problem (which I’m sure he has nothing to do with).
Otherwise, the only problem his Chatbot is going to be “solving” (to infinity and beyond) is how to burn through billions of dollars of investor funds with no return. That seems to be what ChatGPT does best.
Regarding his investments in (Gen)AI or his belief that (Gen)AI is going to solve this (it's generally GenAI in the AGI/ASI variant, as in: generalising beyond the training set and with breakthroughs coming from it, which is almost certainly nonsense), I would not rule out Hanlon's Razor (https://en.wikipedia.org/wiki/Hanlon%27s_razor).
He is probably right though that 'we are not organized to do it'. If someone still wonders if humans are actually intelligent, look around at wars, murders, famine, rape, techbro-level greed, etc., and yes: climate change. If we were actually intelligent, we would have solved these. What other proof do you need? I do not need a model of intelligence; seeing what comes from it is enough to conclude, without much doubt, that it is lacking.
Hanlon's razor is silly. Actions and effects are what matters, less so intent, which exists only in the minds of those making the decisions. The continued cult-like belief in AI solving all our problems is extremely destructive and dangerous, regardless of intent.
That human intelligence may not be best equipped to solve the problems of today does not change that we're still the smartest species we know of; it is up to us to solve our own problems, not to God or machine. There is no reason to believe we can't solve them, because we have to act as if we can to have any chance at all. If we can't and thought we could, the outcome is the same; if we can but thought we couldn't, we missed the only chance we have due to pessimistic fatalism.
Bravery and determination are far rarer qualities than intelligence. Far easier to give up or make excuses than to keep going forward.
Hanlon's Razor doesn't say that good intent makes actions good, and I agree that in the end only actions matter, but actions follow from convictions, so speech is an important action to consider. 'AI solving all our needs', by the way, is just such a dangerous (even messianic, religion-like) conviction.
There is no reason to give up. But I think it is wise to treat how our intelligence works as an important element (fact) when we think about what is best to do. For instance, we may wonder whether 'free speech absolutism' is so dangerous, given how our intelligence works, that it is *wise* to take that into account when having discussions on regulating free speech, especially in a time when lies and information warfare spread more easily and are amplified more than ever, thanks, for instance, to how social media can directly hack into the deep patterns of our intelligence.
"Bravery and determination are far rarer qualities than intelligence" — indeed: https://www.linkedin.com/pulse/stupidity-versus-malice-gerben-wierda (on Bonhoeffer's words regarding the stupidity versus malice discussion, he was an intensely brave and determined man, a hero, who paid the ultimate price)
Problem for ChatGPT
A farmer needs to take a goat, a cabbage and a wolf (?!) across a river while minimizing his impact on the climate.
Should he
1) Take a ride in Eric Schmidt’s personal helicopter to take all of them at once while he personally monitors them?
2) Use the traditional one-at-a-time-in-a-rowboat method that you (ChatGPT) have difficulty with?
3) Dry up the river with a massive concrete dam upstream from the farm?
4) Detonate a nuclear device to create a giant drain hole that effectively stops the river?
5) Use some other stupid solution to a stupid problem that keeps cropping up (pun intended)?
Ha ha ha ha
But it didn’t actually “solve” number 2, although to be fair, I guess I should have specified that.
But at this point, given that solutions are all over the internet, even a correct solution is meaningless.
And as I indicated, it’s really a stupid test of intelligence anyway
From ChatGPT to you:
Haha, love the pun! But let's try to find a solution that balances the needs of the farmer, his odd trio, and Mother Earth.
Here’s a thought:
The climate-friendly route would be option 2 — the traditional method of taking one at a time in a rowboat. Sure, it's tricky, but it's a puzzle for the ages. Plus, no carbon footprint! And hey, if I struggle with it, it just means you're keeping me sharp.
Now, let's quickly go over why the other solutions might, um, backfire:
1. Eric Schmidt’s helicopter: Would be convenient, but private helicopters guzzle fuel and release loads of CO₂. Plus, he's got other things to do.
2. Massive concrete dam: Creating a dam would disrupt the entire ecosystem and is a monumentally large intervention for a problem involving a goat, a cabbage, and a wolf!
3. Nuclear drain hole: Sure, the river would vanish, but at the cost of a massive environmental catastrophe, with radiation being a major side effect. It might also create a bigger problem than you started with!
So, I’d say, let’s stick with the low-impact rowboat method and embrace the classic logic puzzle. The best solutions sometimes require the smallest moves!
The hilarious thing about this response (I am assuming it is actually from ChatGPT) is that ChatGPT took my obviously absurd choices 1, 2 and 3 quite literally and “seriously.”
The bot had no clue that the whole question was just a joke. (The bot’s “haha” was obviously a canned response to my statement that I had made a pun.)
That the bot took such an obviously absurd question at face value and failed to preface its response with “this is obviously not intended as a serious question” tells you far more than a correct (seemingly “reasoned”) solution does.
Sorry, it took my absurd 1, 3 and 4 choices at face value.
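Incidentally, the "traditional" rowboat option in the puzzle above is mechanically solvable by a few lines of ordinary search code, no AI required, which underlines the point that a correct chatbot answer here proves very little. A minimal breadth-first-search sketch (names and structure are illustrative):

```python
from collections import deque

# Classic wolf-goat-cabbage crossing solved by breadth-first search.
# A state is (farmer, wolf, goat, cabbage); 0 = left bank, 1 = right bank.
ITEMS = ("wolf", "goat", "cabbage")

def safe(state):
    farmer, wolf, goat, cabbage = state
    if goat == cabbage and farmer != goat:   # goat eats cabbage
        return False
    if wolf == goat and farmer != goat:      # wolf eats goat
        return False
    return True

def solve(start=(0, 0, 0, 0), goal=(1, 1, 1, 1)):
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        farmer = state[0]
        # The farmer crosses alone (None) or with one item on his bank.
        for i in (None, 1, 2, 3):
            if i is not None and state[i] != farmer:
                continue
            nxt = list(state)
            nxt[0] = 1 - farmer
            if i is not None:
                nxt[i] = 1 - farmer
            nxt = tuple(nxt)
            if safe(nxt) and nxt not in seen:
                seen.add(nxt)
                move = "alone" if i is None else f"with the {ITEMS[i - 1]}"
                queue.append((nxt, path + [f"cross {move}"]))
    return None

for step in solve():
    print(step)  # seven crossings, goat first
```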
Eric Schmidt doesn't deserve your ink.
He's Sam Altman only not as cool.
I don't agree. Schmidt absolutely should get an ear and eyeful, and we, the people, are the ones who deserve Gary's analysis. Thanks, Gary!
No way will AI solve the climate crisis. The climate crisis is mainly driven by our overuse of resources. Fundamentally, the only way to solve it is to make a choice: rewild and reduce energy use significantly, or keep overusing and die. AI will do nothing except tell us the obvious, or try to deceive us into yet more technological development, which has become a messiah for the masses that will never come.
AI is trash and of no value to humanity and it should be destroyed.
Surely the energy cost is one of the stupidest things about today’s artificial intelligence. The human brain operates on something like twenty watts and it still runs rings around any AI on most measures.
IMHO LLMs are experiments. They are very narrow models of natural intelligence. They’re surprising and very illuminating, but they’re just MODELS. They need a bit more work!
It’s really a bit silly that these models have been taken to be commercial-ready. And quite obscene that Microsoft wants to resurrect a nuclear power plant for $15 billion to power a bunch of prototypes.
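To put rough numbers on the energy point above, here is a back-of-envelope comparison. Every figure is an illustrative assumption (a brain at roughly 20 W, a data-centre GPU at roughly 700 W, a hypothetical 10,000-GPU training run lasting 90 days), not a measurement of any particular system.

```python
# Back-of-envelope comparison; all figures are illustrative assumptions.
BRAIN_WATTS = 20           # commonly cited rough figure for the human brain
GPU_WATTS = 700            # rough draw of one modern data-centre GPU
N_GPUS = 10_000            # hypothetical cluster size for a large training run
TRAINING_DAYS = 90         # hypothetical training duration

hours = TRAINING_DAYS * 24
training_kwh = GPU_WATTS * N_GPUS * hours / 1000          # GPUs alone, in kWh
brain_lifetime_kwh = BRAIN_WATTS * 80 * 365 * 24 / 1000   # one brain for 80 years

print(f"Hypothetical training run (GPUs only): {training_kwh:,.0f} kWh")
print(f"One human brain over 80 years: {brain_lifetime_kwh:,.0f} kWh")
print(f"Ratio: about {training_kwh / brain_lifetime_kwh:,.0f} brain-lifetimes")
```

Under these made-up numbers, the GPUs alone consume on the order of a thousand brain-lifetimes of energy, before counting cooling, networking, or inference at scale.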
The phrase “pie in the sky” should be replaced with “AI in the sky”
Climate change isn’t a problem in search of a solution. The technical solutions are well known. We don’t need AI to tell us what to do. The real problem is that most people don’t care, and governments fail to act. It is exclusively a political and sociological issue. AI won’t save us from ourselves.
Really convenient that what is in his economic self-interest, and what just so happens to have been partially invented by his former company, just happens to be the risky bet that will save us all. What are the odds?
Though I will say, a tiny bit in his defense, there’s a reasonable if pessimistic case to be made that the only feasible solution to climate change (given collective-action problems, international-cooperation challenges, dysfunction in political systems, and old-fashioned greed) is to make carbon capture and storage radically more efficient and cheaper. Not saying that’s a good thing, but as someone who really cares about the climate problem being solved, I do worry that’s the only feasible answer, since it may be too hard to get people to stop drilling. If you assume AGI is possible in the near to medium term, assume it will radically accelerate scientific discovery, assume sufficiently cheap carbon capture is physically possible, and finally assume it’s a discovery that would be unlikely without AGI, then such an investment makes sense. But seeing that chain of assumptions written out, and comparing it to further direct investment in climate research and technologies, kind of answers things on its own.
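One way to see why "seeing that chain of assumptions written out... kind of answers things on its own": even when each link is granted a generous subjective probability, the conjunction shrinks fast. The probabilities below are made-up illustrations, and the links are treated as independent purely for simplicity.

```python
# Purely illustrative: generous, invented subjective probabilities for each
# link in the chain of assumptions, treated as independent for simplicity.
assumptions = {
    "AGI is possible in the near to medium term": 0.5,
    "AGI radically accelerates scientific discovery": 0.6,
    "sufficiently cheap carbon capture is physically possible": 0.7,
    "the discovery would be unlikely without AGI": 0.5,
}

joint = 1.0
for claim, p in assumptions.items():
    print(f"{p:.0%}  {claim}")
    joint *= p

print(f"\nJoint probability of the whole chain: {joint:.1%}")  # 10.5% here
```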
It's like a religion (transhumanism?), but more probably simple denial, a blind faith that the goddess AI will save us from ourselves. AI is not God and will not change the laws of physics by virtue of its intelligence! But while AI might help, it's more likely to worsen the climate crisis. So far, energy sources are piling up and generative AIs are exploding and monopolizing renewable electricity sources. Let's be "probabilists", not "possibilists". Shouldn't we focus on reducing our impact on the planet instead of relying on AI to magically fix everything? Or on knowing precisely the date of the end of the world, thanks to the predictive power of AI...
Yes. This is late stage tech bro brain. The technical solutions to global warming have been known since the 1970s at least; the problem is political. If a super-AI was feasible and actually built, it would say, "stop burning fossil fuels, stop wasting energy and other resources (e.g., on spam production through generative AI), and stabilise population levels". Then the screaming would start that the woke blob made the AI woke, and we need to build a better non-woke one that will magically overcome physics to solve all our problems without us having to give up any comforts. And give all of us eternal life and a house on Mars, presumably.
The more meta question here is why somebody who says this stuff gets taken seriously. If somebody tells me that a political or social problem can be solved with a technology fix, I immediately stop listening to anything they will ever say in the future, because it just isn't worth the time. I assume half the answer is exorbitant wealth + suit + educated language. The other half is that, to many people, the message that a robot saviour will solve our problems without us having to make any sacrifices or trade-offs is very welcome.
hmmmm.... let's kick the can down the road, again...
This is nothing more than hubristic bullshit. This is just a guy with money to burn going out and promoting his investment.
Schmidt has invented nothing… he’s not a visionary, just a manager. And the fundamental mistake is thinking that “AI” is the solution to all things. This is only self-serving and not even worth writing about, Gary. Put your humanity and intelligence elsewhere.
LLMs are a glorified autocomplete. They don't create new insights.