It's not far off from me starting a company whose only goal is to build a Death Star to vaporize the Earth in an instant, and as I post about my company's progress, VC investors throw billions at me and dorks with PhDs talk about how "exciting" and "fascinating" my research is.
The US economy is about 70 percent consumption. At the top of the power pyramid are the interests behind the central banks. If no one has any money, where will profit come from, and how will anyone repay the loans - the freshly created money from which the banks gain interest, power, and control? Why loan money into existence when there's no one, and no company, to pay it back, because there's no service or good left to sell?
What happens to rent seeking when no one can make payments?
Maybe some utterly dystopian world awaits, with 90 percent of us killed off, but again: what about money and power? I just don't see how this is viable, except as something no human would want to live in.
Human beings need jobs. They need something to do. Even IF we could put everyone on welfare, that is far from a desirable outcome. When people lose their jobs, it is devastating to their families and their communities.
The Pyramids are still there. Most of this hardware will be obsolete in a few years, and the buildings will resemble the rusting industrial sites of the 19th century within 10 years.
"The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people. And it's super obvious. That seems like a very fundamental thing."
That is because LLMs generate quiddity without haecceity. They reproduce generalizable patterns but never the situated, temporally-embedded specificity that makes human generalization possible which is why they suck worse at it than humans.
Jeffrey Anthony: I think the "haecceity" issue is key--"thisness" by definition is unavailable to machines or data centers or whatever name they go by, even though they, like everything and everyone else in the world, are caught up in the movements and particularities of the time and space continuum as a matter of merely being in the stream of history. Today's THIS context is related to but not the same as tomorrow's this context. It's a matter of meaning flow and development where, on principle, 1 + 1 never make two.
It's a huge philosophical problem having to do with the difference between ongoing performance and the human ability to reflect on and consider here-and-now specifics (as you say), where reflection is also itself a performance: there is a doubling-back, and a one-time meaningful thinking set that most of the time becomes unimportant in the next THIS go-round.
But I am not in that loop, so I have often wondered what's going on in the cognitive/philosophical backdrop of the (ahem) minds of those involved, besides money, that is... and whether the programmers understand haecceity, or quiddity for that matter (depending on one's refined definition of either term). I was glad to see your references.
A neat thought. Does it imply that the idea of disembodied intelligence is incoherent? Because you could argue a clock is in some sense temporally embodied. I think it means there has to be embodiment and purpose.
Mike: Albert Einstein wrote about the mind-to-reality relationship: “Don’t pay any attention to what the scientists say to you, watch what they do.”
It's not disembodiment, but rather that even "bodies" and sensing the sensible are intelligible/meaningful, and we know anything sensible not by looking or otherwise sensing, but by asking intelligent questions and reaching an understanding (even knowledge) of them. Extroversion and naive realism as a foundation for one's philosophical understanding will get a person exactly nowhere, or worse: they stifle one's questioning and growth and lead us in the wrong direction.
Any knowing we accomplish is about our activities of intelligence inquiring into the intelligibility and meaning of WHAT we are asking about, including their sense-to-sensible relationship. Coming to know in a critical sense, then, is about the intelligible and the grasping of sufficient evidence, not about taking an immediate look. What that means in relationship to death is another question altogether; but at least such thinking needs to stand on a correct version of the intelligence-to-intelligibility relationship itself qua reality, and with a matching metaphysics, of course.
However, we always do return the evidence to play out in the sensible because we are sentient beings and the time-space continuum is where the (invisible) principles and laws of science are the most static and predictable (such as they are)--in sense-based materials. Most of us merely confuse the context. But more importantly, because the sensible is also intelligible-meaningful--it's the intelligible-meaningful that we know, with our self-corrective intelligence and critical judgments, and not merely by sensing/looking at the sensible. Once one realizes this movement of mind-to-reality, one can also realize how immersed we already are in a universe of apparently endless intelligibility--and go from there.
I am far afield from Gary's work here on this blog; though from a philosophical point of view, not really.
With all the circular investing & now government backing, you could almost think they could keep the bubble inflated, strictly to protect their $$$. But the numbers, the P/E, are so impossibly bad. Has there ever been a situation like this before? I don’t think so.
Let's hope they are mainly interested in keeping the party going until the midterms are over... Even with gov't backing this thing has to burst sooner or later, and they must know that as well.
After the midterm elections, they will look toward keeping the bubble inflated until the next presidential election because, you know, it's the stock market, stupid.
Even a non techie like me saw this madness, albeit from a different perspective, from the moment ChatGPT dropped. My first thought was, back then, ooh this is going to be another FTX, but worse for society. But the real question is, imho, why is the tech world gunning so hard for this elusive thing called AGI when we have so many real-world, on-the-ground problems that need our attention, energy, and resources? It might indeed take a nice big global crash to get people back to their senses.
They say that AGI would help us solve those problems, but we already know how to solve them. We just need money to be spent on it. Which it isn't, because of this A.I. junk.
Yes, we already know—and have for a while. Anything we still don't know, we can certainly figure out, and if that process requires the help of AI systems, that's all good, but they should be utilized and leveraged with those goals in mind, not in achieving some kind of god-level superintelligence. We've already got that... in religion. And, quite simply, in Nature.
We had a pretty big global crash in 2008 and that didn't change much.
I wonder more and more what level of callousness, hubris, greed, and lies are actually fueling this AI arms race from a handful of wealthy geniuses. And what's the level of actual interest in gunning for something transformative that's never been done. I doubt it's simple. I think some of them think or hope they can create AGI and that it's a good thing.
From a technical viewpoint, in their domains, many of the tech founders appear to be truly exceptional - Zuckerberg appears to be almost a coding prodigy. What is truly technical achievement and innovation (individually or as a group) versus myth is hard to say, especially from the outside; I know nothing about coding or business. But as much as I dislike Meta, Zuckerberg and company built something noteworthy and huge. As did Musk, with PayPal and SpaceX. And Bezos with Amazon.
But have we ever seen an elite more divorced from society, experience, or values? Sure, you're amazing at coding and raising money - props. Now let's let them run the world. They seem to have limited or no interest in nature, society, art, philosophy, political science, history, or even people. Wisdom and empathy > raw intelligence (whatever that means).
Yes to wisdom and empathy, a thousand times over. Intelligence is much more than just book smarts. There are diverse kinds of intelligence, including that of the other living beings sharing this planet with us, but those are just called "animals" and "plants" and considered somehow less worthy. It takes a certain amount of callousness, ruthlessness, and cynicism to reach the status of a billionaire (not always ofc), but the other side of this problem is not the billionaires. It's the millions of people who admire them and aspire to be like them. If that weren't the case, the billionaires wouldn't have this outsize influence.
I'm not a scientist, but I have worked with computers since the 1960s in all kinds of situations, and it strikes me that there is a fundamental misconception/misunderstanding about the working of consciousness, be it human or animal. AI is mechanistic, crude, brute-force, simplistic, but above all non-quantum. It's quite obvious that the brain works on the quantum level; it utilises the nature of quantum mechanics. It's the Heisenberg Principle, it's quantum equivalence; it can never be truly understood, as it's conscious matter trying to perceive itself, and the act changes both viewer and viewed. Yes, more, faster, larger computing power, but it's still just a pathetic facsimile, an imitation. Only the living brain/body, embedded in reality, can think, create, feel. AI is capitalism driven insane by its vanity, by its hubris, by its greed, by its psychopathy.
William, do you have any references for research showing that "the brain works on a quantum level...?" Thank you. (I need data, even for that which is "...quite obvious." I'm not being disrespectful.)
Caveat lector. My degrees are in History and Chinese language (大家好, "hello, everyone"), but I do read a lot.
Try reading "Reinventing the Sacred" by Stuart Kauffman where he devotes a chapter to the Quantum Brain (chapter 13). He argues that the human brain may have something similar to the antenna protein that is part of photosynthesis. This protein is divided into quantum coherent and decoherent groups speeding up the process of photosynthesis as a sort of quantum chemical catalyst.
If you do not recognize Stuart Kauffman then look him up because it would take too long to explain his contributions to Complexity science.
The book is a critique of physics and determinism. If the brain is not quantum, then it is deterministic, and we are left with an epiphenomenal brain that observes the world but cannot act on it. A deterministic brain has no free will, with everything flowing from the location and movement of particles in the Big Bang in long chains of causation that extend to the present. Humans are just floating clouds of particles where all the explanatory arrows point down to particles or strings or whatever is there, per Steven Weinberg. AI does not make much sense in such a world.
My brief summary mangles his argument and does not do it justice.
Kauffman provides at least the bare outlines of a possible research program and what you could look for. The trick is devising the experiments to test it.
We still do not really understand how the brain works. Lots of work still needs to be done.
Metisse: no, nothing, but I'm sure it exists somewhere. It's just so bloody obvious that the highest organised form of matter, the human brain, exists courtesy of quantum mechanics. But as I said, it's the brain trying to perceive itself, and in doing so it alters the relationship.
The word “quantum” is only needed if you want funding for pretty much any research these days.
Except for AI, that is.
But I’m surprised that “quantum AI” has not yet caught on even for that.
Ironically, when you see the word “quantum” preceding any field except mechanics, you can be assured that most of what is being sold is quantum snake oil.
The notion that humans are all-powerful is frankly sick. Humans have a severe limit on their conscious mind: only four pieces of information can be varied at once (FourPiecesLimit.com). As things grow more complex, they make a dirty mess. We need machines that can handle thousands of interacting things at once, because we can't do that.
"conscious" is a very low bar - your conscious mind is very limited in capacity. It doesn't allow any comparison with something that can hold tens of thousands of possibilities in its head at one time.
I can easily conceptualise and imagine the effect of varying arbitrary numbers of parameters using mathematical tools that mean I can think in terms of parameter sets, not individual parameters. So the four parameter claim may be ‘true’, but it is also meaningless, as it is based on a failure to consider the power of viewpoint shifting.
Sorry, it killed my best example - a person driving a car and holding a mobile phone to their ear, and staring out the side window, before they drive into a tree. You can say how marvelous the brain is, then you have to come to terms with how stupid it can be.
helicopter with 6 degrees of freedom - very dangerous to fly
Economic modelling - a push to take the work away from economists because they can't handle the complexity and make simplifications, which turn out to be wrong.
Your overall point stands: computers are, and always have been, brute-force devices, going back to the electromechanical Bombe that Alan Turing designed to break the Enigma code. That is not how the human mind operates, and it's not clear at all that a brute-force device can imitate the human mind. There might be another way to build a computer, but I haven't heard of anyone proposing to use such a thing to power A.I.
A computer can simulate an undirected network of objects and operators, with free resources, so the network can extend itself. This can be used to read and "understand" English text, with its large vocabulary (50,000 words), its figurative meanings (a walk in the park - about 10,000), its elisions (a movie set in Hawaii).
The combination allows a reasonable chance of handling large pieces of text - 1000 page legislation, 100,000 page specification of a jet fighter. Brute force is not a good approach, but dynamic and highly detailed is. We call it Semantic AI. It has been possible since the 1990s.
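To make the idea concrete, here is a toy sketch of a self-extending word network - my own hypothetical illustration under stated assumptions, not the actual Semantic AI system described above. Each new word becomes a node, known figurative phrases are kept as single multi-word nodes, and words seen together are linked by co-occurrence edges, so the network extends itself as it reads:

```python
# Toy sketch: a self-extending network of word nodes with co-occurrence
# edges. Known multi-word phrases (e.g. figurative meanings) are matched
# greedily before single words, so "a walk in the park" stays one node.
from collections import defaultdict

class WordNetwork:
    def __init__(self, phrases=()):
        self.nodes = set()                 # grows as new text is read
        self.phrases = set(phrases)        # known multi-word units
        self.edges = defaultdict(int)      # undirected co-occurrence counts

    def read(self, sentence):
        words = sentence.lower().split()
        units, i = [], 0
        while i < len(words):
            # try the longest known phrase starting at i, else take one word
            for end in range(len(words), i, -1):
                candidate = " ".join(words[i:end])
                if candidate in self.phrases:
                    units.append(candidate)
                    i = end
                    break
            else:
                units.append(words[i])
                i += 1
        self.nodes.update(units)           # the network extends itself
        for a in units:                    # link every co-occurring pair once
            for b in units:
                if a < b:
                    self.edges[(a, b)] += 1
        return units

net = WordNetwork(phrases={"a walk in the park"})
units = net.read("the exam was a walk in the park")
print(units)  # ['the', 'exam', 'was', 'a walk in the park']
```

A real system would of course need far more than this (operators, figurative-meaning resolution, elision handling), but it shows the basic design choice: dynamic, detailed structure that grows with the text, rather than brute force.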
I keep reading that AI will make new scientific discoveries. However, it seems prudent to me to wait until it has actually done that in a lab before making the sort of massive investments we have been seeing. I suppose that's why I'm not a millionaire.
It's hard to be "blindsided" unless you are blind.
There's a level of simple-mindedness here by supposed "scientists" that reveals much about the shallowness (or even absence) of anything like a field.
For example, none of the deep systems researchers of the ARPA and Parc eras in the 60s and 70s were this naive, and wouldn't have committed epistemological blunders this egregiously.
Note that we live in the 21st century, with technologies and ideas available that can let us know what is around us, 360 degrees: right and left, north and south.
Only people who don't understand this are still "blind in the backs of their heads". This is also a metaphor for the kind of naivete and simplemindedness I'm complaining about.
You wear a hat with cameras in all directions? You walk about constantly turning around to check all angles? I don't think you're careful with words and meanings, I think you're performing, pretending your thoughtless comment was innocent pedantry, and trying to transfer responsibility to me for correctly interpreting it.
Well, I'm not selling anything, nor was I trying to be literal. Obviously, I also failed at being clear.
But, surely I was combining literality and metaphor -- following the way that Gary used "blindness" (he didn't mean "not being able to use one's eyes") as a metaphor for brains not being able to handle what's around.
This is also why I used "technologies and ideas available" above, in the same spirit, pointing out the lack of use of these new tools to avoid various blindnesses.
And, no, I don't wear a hat with cameras in all directions. However, I've been a scientist for almost 70 years, and I do "constantly turn around to check all (as many as possible) ideational angles". This is because the doorway into scientific thinking is the realization that "The world is not what it seems" (to our commonsense minds). In other words, my "all-directional hat" is internal.
And I can assure you that there was a lot of thought behind my comment. (That's my profession and habit.)
The AI Hype Machine Runs on Ignorance About Human Intelligence (HI). If It's Not Ignorance, It's Lies.
As a neuropsychiatrist, I've spent a decade studying one fundamental question: how does human intelligence actually compare to AI?
In 2014, I wrote that the entire AI-HI comparison was a myth. When ChatGPT exploded onto the scene, my conclusion hardened: if market valuations of LLMs rest on the assumption that they're approaching human-level intelligence, we're living inside a massive hype bubble. Because they're not even close.
The problem? The AI-HI comparison is built on a foundation of profound ignorance about the second half of the equation: human intelligence itself. Nobody - not the AI providers, not the consultants, not the breathless tech journalists, not even some AI scientists - actually seems to understand what they're comparing AI to.
What we're left with is marketing claims that are, to put it bluntly, pure baloney.
If they really don't know even the basics of HI, then their claims about AI, and their expectations and fears, are pure ignorance. If they are not ignorant, they are lying. A one-trillion-dollar lie. I see no other possibility. Do you?
What is tragic, and has been true now for decades, is that research and development is driven by a few people's lust for trillions, not by the aim of better understanding and applying intelligence with the best values.
It’s also concerning that the data centres popping up everywhere to prop all this up come at such a high cost in terms of power and in particular, water. The race to profit is causing water insecurity for communities. This is not progress.
I’ve seen conversations inside the AI community around the ethics of AI in terms of its usage, but almost nothing about the ethics of the infrastructure that supports it all. The cost to society is higher than a potential financial collapse.
There is nothing about this that is "progress". We don't get high-speed rail, affordable, rigorous higher education, complete health care, clean and beautiful cities, good nutritious food, or anything else *we want to have*. Instead, psychopaths in our government, in collusion with psychopaths in Silicon Valley, have decided that they get to decide our future for us: one where we're irrelevant, politically impotent, impoverished, etc.
Whatever *this* is, I want absolutely none of it.
The first rule of problem solving is defining the problem one is trying to solve.
What exactly IS “the AGI problem”?
And why is not having AGI (whatever it is) even a “problem”?
No one seems to know.
Well, you just described one of the key issues here, because what you're describing is the classic problem-solving approach every freshman engineering student has drilled into their head. But many of the people in SV aren't actually engineers in the literal sense, despite their incessant overuse of the title - and Jensen Huang, as an EE, defies this analysis somewhat. Insofar as I can detect such thinking by the SV Psychopaths (that should be the name of a baseball team in Palo Alto or San Jose), they've identified humanity and humanness as the problem to be solved. Fatigue is a problem. Compensation is a problem. Quality of life is a problem. Physical labor is a problem. They see all of the pieces of being a human and engaging in human activity as contemptible, while lying to us when they say that they want to free us to do the “meaningful things”. They see AGI as their triumph over our inherent contemptibility and, as I see it, their solution to the problem of Us as a species.
Agreed. I was going to put it as "do we want AGI? if so why?"
Fucking brilliantly said
Propping up the economy at the expense of our environment is foolish. Now a memory chip shortage, too. Wasted resources without productivity growth.
The environmental complaints are completely divorced from reality and are a massive distraction from important water and electricity uses. Masley has covered this extensively. https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about?open=false#%C2%A7this-post-in-a-nutshell
I appreciate such analyses, but the fact is that we need electricity for our lights, food to eat, heat for our homes, etc. Pollution caused by these activities is not the same as pollution caused by A.I. which we *do not need*, and which does us no good.
This is a cop-out. Orders of magnitude more energy is spent on other things we don't need, like doom-scrolling. AI is singled out as an especially bad waste of resources despite being a negligible one; as the analysis shows, it uses so few resources that it arguably SAVES us resources, because if we weren't using AI we'd be doing something else that is almost certainly more resource-intensive.
The reason it's singled out is misinformation about its resource consumption, not because people understand how few resources it uses and make a clear-headed decision that it's important anyway.
A copout? Project much? Your comments are sophistic and grossly intellectually dishonest. The idiotic post you linked attacks a complete strawman ... the issue isn't people using chatbots, it's on the other end -- the resource usage of the AI computers.
What about the material, financial and human resources flushed down the drain, in the pursuit of a goal, AGI, that is completely divorced from reality?
https://en.wikipedia.org/wiki/Whataboutism
What about "Orders of magnitude more energy is used on other things we don't need, like doom-scrolling." ?
This is core to the argument that "AI is a significant environmental burden" is divorced from reality. Saying "what about these other potential problems that aren't about the environment" is not relevant to the point I'm making.
And we don't even necessarily disagree on that other topic either. I'd be happy to discuss it with you on a different thread, or once we're finished talking about the topic I raised. Are we in agreement that the environmental complaints about AI are completely divorced from reality and are a massive distraction from important water and electricity uses?
I was merely pointing out that the water and electricity used per plain-text prompt by an average ChatGPT user is just a tiny fraction of the overall cost of AI, environmental cost included. That figure completely ignores the computational and economic resources that go into training, CoT/reasoning, agents, and video and image generation. If you want to claim that one can spend on the order of a trillion dollars and not significantly impact the environment in the process, be my guest. Note that we are talking about investment figures of the same order of magnitude as those required to significantly reduce the emissions of the US. Of course one can always claim that without AI we would spend it all on something equally stupid and wasteful - but then we are back at https://en.wikipedia.org/wiki/Whataboutism.
Aidan, it sounds like you are underplaying both the significant energy demands and the negative environmental impacts of AI tech development and the associated data center construction boom that's been propping up the US economy.
Check out the Center for Biological Diversity's recent report, "Data Crunch," which is linked within the following press release: https://biologicaldiversity.org/w/news/press-releases/report-ai-data-center-boom-threatens-us-climate-goals-2025-10-29/
And it is not just the electricity and water needed to directly run these datacenters. This is an interesting article about just one of the many other ripple effects: the huge boom in aluminum demand to build all the server racks and other items at the heart of the data centers. It increases mining and smelting activity, which are among the heaviest industrial energy burners around.
It is way worse than meets the eye initially.
https://finance.yahoo.com/news/ai-data-centers-massive-demand-for-aluminum-is-crushing-the-us-aluminum-industry-110035572.html?guccounter=1
To me the most crucial issue is that the success of LLMs, and the hope that it'll lead to AGI, is *not* based on any reasonable theory of intelligence. There are some obvious theoretical problems.
https://petervoss.substack.com/p/the-7-deadly-sins-of-agi-design
I'm still waiting on a single person to give me a reason why "AGI" is something I or anyone else on the planet should be rooting for. Sounds like it would be a catastrophe for literally everyone.
It is already a catastrophe for almost everyone (except the snake oil salesmen), and as of today it is nowhere in sight except in the marketectures of said salesmen.
I think that you are asking exactly the right question. I am not sure that it would be a "catastrophe" because fundamentally, an AGI is a ghost. It's something that we are making up, and that can't be substantiated. What does this AGI "God" do? Does it judge, does it condemn, from which principles? It took about 4 billion years for life on Earth to become what it is today, and we certainly can't make sense of it. Good luck to anyone who thinks that AGI is around the corner and will answer our most fundamental questions. Although thinking of it, I think that I have the answer (42 it is)!
Real AGI, deployed democratically and with a goal to boost human agency (what we're working on) will be highly beneficial.
https://srinipagidyala.substack.com/p/rip-techbro-era-20082025-end-of-attention
https://petervoss.substack.com/p/imagine
This only convinces me if you think I'm dumb enough to actually believe that everyone will go on government welfare, which I am not. You shouldn't be either. There's no welfare utopia coming. That is stupid. All the people who lose their jobs will simply be left to suffer, including you. You are a moron if you think otherwise. Truly.
Thank god someone said it. People who want this future seem batshit, or only look at the possible future through rose-colored glasses. It's like the social media story on steroids all over again.
Anyone who believes Big Tech companies are now suddenly building a technology so they can be nice and help us honestly deserves to be ridiculed. It is so incredibly obvious that they and the government are investing in AI and robotics research so that they can lock all of us out of employment, let us die, and be the overlords of obedient robots that continue to enrich them. And for some reason, academics and tech geeks choose to straight up ignore this and say "well what if the AGI is good and it's not used evilly!!!"
It's not far off from me starting a company whose only goal is to build a Death Star to vaporize the Earth in an instant, and as I post about my company's progress, VC investors throw billions at me and dorks with PhDs talk about how "exciting" and "fascinating" my research is.
The US economy is about 70 percent consumption. At the top of the power pyramid are the interests behind the central banks. If no one has any money, where is profit going to come from, and the ability to pay back the loans - the freshly created money from which the banks gain interest, power, and control? Why loan money into existence when there's no one, and no company, to pay it back, because there's no service or good to sell anymore?
What happens to rent seeking when no one can make payments?
Maybe there's some utterly dystopic world awaiting, killing off 90 percent - but again, what about money and power? I just don't see how this is viable, except for it to turn into something no humans would want to live in.
Human beings need jobs. They need something to do. Even IF we could put everyone on welfare, that is far from a desirable outcome. When people lose their job, it is devastating to their family, and their community.
My question is, which prominent A.I. backer is pro-democracy?
There’s not a single Jeffersonian among them.
AI backers are today’s Alexander Hamiltons (sans the intelligence, wisdom and foresight, of course)
Good question!
I guess we'll have to become more prominent with our AGI system
https://petervoss.substack.com/p/insa-integrated-neuro-symbolic-architecture
Or maybe you just shouldn't make an obviously bad technology
Real AGI? How about a real proposal?
Yes, we provide that to real investors
well-said
Nice to see you here Peter!
For the latest on reducing hallucinations and implementing AGI, please refer to the following papers:
Substack Archives —
https://chrispwendling.substack.com/archive
And-
http://www.itrac.com/EGM_Document_Index.htm
Contact — cpwendling [at] yahoo [dot] com.
Could we be heading for the biggest misallocation of capital ever?
yes
Already there
Don’t forget the Pyramids.
The Pyramids are still there. Most of the hardware will be obsolete in a few years, and the buildings will resemble the rusting industrial sites of the 19th century within 10 years.
That's fair, though a big, fancy tomb is more useful than the average slop-A.I. output.
Some believe that the Great Pyramid was a power plant, to power the EgyptNvidia chips needed for AEI (Artificial Egyptian Intelligence)
There is actually support for this hypothesis in some of the AI-roglyphics.
Both are pyramid schemes
"The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people. And it’s super obvious. That seems like a very fundamental thing.”
That is because LLMs generate quiddity without haecceity. They reproduce generalizable patterns but never the situated, temporally embedded specificity that makes human generalization possible, which is why they suck at it worse than humans do.
Loving how simultaneously high and low brow this comment is, chef’s kiss work.
It's amazing, I just yesterday learned what "quiddity" means, and here it is in the wild.
Jeffrey Anthony: I think the "haecceity" issue is key--"thisness" by definition is unavailable to machines or data centers or whatever name they go by, even though they, like everything and everyone else in the world, are caught up in the movements and particularities of the time and space continuum as a matter of merely being in the stream of history. Today's THIS context is related to but not the same as tomorrow's this context. It's a matter of meaning flow and development where, on principle, 1 + 1 never make two.
It's a huge philosophical problem having to do with the difference between ongoing performance and the human ability to reflect on and consider here-and-now specifics (as you say), where reflection is also itself a performance; there is a doubling-back, and a one-time meaningful thinking set that most of the time becomes unimportant in the next THIS go-round.
But I am not in that loop, so I have often wondered what's going on in the cognitive/philosophical backdrop of the (ahem) minds of those involved - besides money, that is - and whether the programmers understand haecceity, or quiddity for that matter (depending on one's refined definition of either term). I was glad to see your references.
A neat thought. Does it imply that the idea of disembodied intelligence is incoherent? Because you could argue a clock is in some sense temporally embodied. I think it means there has to be embodiment and purpose.
Mike: Albert Einstein wrote about the mind-to-reality relationship: “Don’t pay any attention to what the scientists say to you, watch what they do.”
It's not disembodiment, but rather that even "bodies" and sensing the sensible are intelligible/meaningful, and we know anything sensible not by looking or otherwise sensing, but by asking intelligent questions and reaching an understanding (even knowledge) of it. Extroversion and naive realism as a foundation for one's philosophical understanding will get a person exactly nowhere - or worse, stifle one's questioning and growth and lead us in the wrong direction.
Any knowing we accomplish is about our activities of intelligence inquiring into the intelligibility and meaning of WHAT we are asking about, including their sense-to-sensible relationship. Coming to know in a critical sense, then, is about the intelligible and the grasping of sufficient evidence, not about taking an immediate look. What that means in relation to death is another question altogether; but at least such thinking needs to stand on a correct version of the intelligence-to-intelligibility relationship itself qua reality - and with a matching metaphysics, of course.
However, we always do return the evidence to play out in the sensible because we are sentient beings and the time-space continuum is where the (invisible) principles and laws of science are the most static and predictable (such as they are)--in sense-based materials. Most of us merely confuse the context. But more importantly, because the sensible is also intelligible-meaningful--it's the intelligible-meaningful that we know, with our self-corrective intelligence and critical judgments, and not merely by sensing/looking at the sensible. Once one realizes this movement of mind-to-reality, one can also realize how immersed we already are in a universe of apparently endless intelligibility--and go from there.
But I am far afield from Gary's work here on this blog, but from a philosophical point of view, not really.
The key is that AI is embotied but not embodied.
It has had a pretrainal LLMbotomy* which has destroyed its ability to think.
*aka “AIs pick LLMbotomy”
https://claireprentice.org/wp-content/uploads/2022/05/diagram-showing-walter-freemans-transorbital-lobotomy-x.jpg
That is certainly possible.
That’s a very educational comment, original topic aside - thank you!
They might generate quiddity but they sure don’t generate liquidity.
Except of the excremental kind
With all the circular investing & now government backing, you could almost think they could keep the bubble inflated, strictly to protect their $$$. But the numbers, the P/E, are so impossibly bad. Has there ever been a situation like this before? I don’t think so.
The big question now is how many billions of taxpayer dollars will go into prolonging the bubble ...
Let's hope they are mainly interested in keeping the party going until the midterms are over... Even with gov't backing this thing has to burst sooner or later, and they must know that as well.
After the midterm elections, they will look toward keeping the bubble blowing until the next presidential election because, you know, it’s the stock market, stupid
Even a non techie like me saw this madness, albeit from a different perspective, from the moment ChatGPT dropped. My first thought was, back then, ooh this is going to be another FTX, but worse for society. But the real question is, imho, why is the tech world gunning so hard for this elusive thing called AGI when we have so many real-world, on-the-ground problems that need our attention, energy, and resources? It might indeed take a nice big global crash to get people back to their senses.
They say that AGI would help us solve those problems, but we already know how to solve them. We just need money to be spent on it. Which it isn't, because of this A.I. junk.
Yes, we already know—and have for a while. Anything we still don't know, we can certainly figure out, and if that process requires the help of AI systems, that's all good, but they should be utilized and leveraged with those goals in mind, not in achieving some kind of god-level superintelligence. We've already got that... in religion. And, quite simply, in Nature.
We had a pretty big global crash in 2008 and that didn't change much.
I wonder more and more what level of callousness, hubris, greed, and lies are actually fueling this AI arms race from a handful of wealthy geniuses. And what's the level of actual interest in gunning for something transformative that's never been done. I doubt it's simple. I think some of them think or hope they can create AGI and that it's a good thing.
From a technical viewpoint, in their domains, many of the tech founders appear to be truly exceptional - Zuckerberg appears to be almost a coding prodigy. What is truly technical achievement and innovation (individually or as a group) versus myth is hard to say, especially from the outside; I know nothing about coding or business. But as much as I dislike Meta, Zuckerberg and company built something noteworthy and huge. As did Musk, with Paypal and Space-X. And Bezos with Amazon.
But have we ever seen an elite more divorced from society or experience or values? Sure, you're amazing at coding and raising money - props. Now let's let them run the world? They seem to have limited or no interest in nature, society, art, philosophy, political science, history, or even people. Wisdom and empathy > raw intelligence (whatever that means).
Yes to wisdom and empathy, a thousand times over. Intelligence is much more than just book smarts. There are diverse kinds of intelligence, including that of the other living beings sharing this planet with us, but those are just called "animals" and "plants" and considered somehow less worthy. It takes a certain amount of callousness, ruthlessness, and cynicism to reach the status of a billionaire (not always, ofc), but the other side of this problem is not the billionaires. It's the millions of people who admire them and aspire to be like them. If that weren't the case, the billionaires wouldn't have this outsize influence.
I'm not a scientist, but I have worked with computers since the 1960s in all kinds of situations, and it strikes me that there is a fundamental misconception/misunderstanding about the working of consciousness, be it human or animal. AI is mechanistic, crude, brute-force, simplistic, but above all non-quantum. It's quite obvious that the brain works on the quantum level; it utilises the nature of quantum mechanics. It's the Heisenberg principle, it's quantum equivalence; it can never be truly understood, as it's conscious matter trying to perceive itself, and the act changes both viewer and viewed. Yes, more, faster, larger computing power, but it's still just a pathetic facsimile, an imitation. Only the living brain/body, embedded in reality, can think, create, feel. AI is capitalism driven insane by its vanity, by its hubris, by its greed, by its psychopathy.
" It's quite obvious that the brain works on the quantum level"
No, it certainly isn't, crank.
Any references to actual studies on the “obvious” quantum effects in human brain?
William, do you have any references for research showing that "the brain works on a quantum level...?" Thank you. (I need data, even for that which is "...quite obvious." I'm not being disrespectful.)
Caveat lector. My degrees are in History and Chinese language (大家好) but I do read a lot.
Try reading "Reinventing the Sacred" by Stuart Kauffman where he devotes a chapter to the Quantum Brain (chapter 13). He argues that the human brain may have something similar to the antenna protein that is part of photosynthesis. This protein is divided into quantum coherent and decoherent groups speeding up the process of photosynthesis as a sort of quantum chemical catalyst.
If you do not recognize Stuart Kauffman then look him up because it would take too long to explain his contributions to Complexity science.
The book is a critique of physics and determinism. If the brain is not quantum, then it is deterministic, and we are left with an epiphenomenal brain that observes the world but cannot act on it. A deterministic brain has no free will, with everything flowing from the location and movement of particles in the Big Bang, in long chains of causation that extend to the present. Humans are just floating clouds of particles where all the explanatory arrows point down to particles or strings or whatever is there, per Steven Weinberg. AI does not make much sense in such a world.
My brief summary mangles his argument and does not do it justice.
Kauffman provides at least the bare outlines of a possible research program and what you could look for. The trick is devising the experiments to test it.
We still do not really understand how the brain works. Lots of work still needs to be done.
OK, free to call me stupid now. : )
Metisse: no, nothing, but I'm sure it exists somewhere. It's just so bloody obvious that the highest, most organised form of matter, the human brain, exists courtesy of quantum mechanics - but as I said, it's the brain trying to perceive itself, and in doing so, it alters the relationship.
It's not obvious to anyone intelligent, knowledgeable, or intellectually honest.
The word “quantum” is only needed if you want funding for pretty much any research these days.
Except for AI, that is.
But I’m surprised that “quantum AI” has not yet caught on even for that.
Ironically, when you see the word “quantum” preceding any field except mechanics, you can be assured that most of what is being sold is quantum snake oil.
The notion that humans are all-powerful is frankly sick. Humans have a severe limit on their conscious mind: only four pieces of information can be variable - FourPiecesLimit.com. As things grow more complex, they make a dirty mess. We need machines that can handle thousands of interacting things at once, because we can't do that.
He didn't say that humans are all-powerful, just that they are conscious. Being conscious does not make you God, afaict.
"conscious" is a very low bar - your conscious mind is very limited in capacity. It doesn't allow any comparison with something that can hold tens of thousands of possibilities in its head at one time.
I can easily conceptualise and imagine the effect of varying arbitrary numbers of parameters using mathematical tools that mean I can think in terms of parameter sets, not individual parameters. So the four parameter claim may be ‘true’, but it is also meaningless, as it is based on a failure to consider the power of viewpoint shifting.
Sorry, it killed my best example - a person driving a car while holding a mobile phone to their ear and staring out the side window, before they drive into a tree. You can say how marvelous the brain is, but then you have to come to terms with how stupid it can be.
helicopter with 6 degrees of freedom - very dangerous to fly
Economic modelling - a push to take the work away from economists because they can't handle the complexity and make simplifications, which turn out to be wrong.
Your overall point stands: computers are, and always have been, brute-force devices, going back to the electromechanical Bombe that Alan Turing designed to help break the Enigma code. That is not how the human mind operates, and it's not clear at all that a brute-force device can imitate the human mind. There might be another way to build a computer, but I haven't heard of anyone proposing to use such a thing to power A.I.
A computer can simulate an undirected network of objects and operators, with free resources, so the network can extend itself. This can be used to read and "understand" English text, with its large vocabulary (50,000 words), its figurative meanings ("a walk in the park" - about 10,000 of them), and its elisions ("a movie set in Hawaii").
The combination allows a reasonable chance of handling large pieces of text - 1,000-page legislation, a 100,000-page specification of a jet fighter. Brute force is not a good approach, but dynamic and highly detailed is. We call it Semantic AI. It has been possible since the 1990s.
So calling scaling laws ‘laws’ is a marketing ploy? Like if “once you pop, you can’t stop” became the Law of Pringles…
Of course it is!
It certainly isn't a scientific law.
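To make the point concrete, here is a minimal sketch of what a "scaling law" actually is under the hood: an ordinary least-squares power-law fit in log-log space. The data below is synthetic and the constants are made up purely for illustration; the point is that the fit only describes past runs, and nothing about it guarantees extrapolation.

```python
import numpy as np

# Synthetic "loss vs. compute" points roughly following L = a * C^(-b), with
# multiplicative noise. The constants here are invented for illustration only.
rng = np.random.default_rng(0)
compute = np.logspace(18, 24, 12)          # training compute in FLOPs
true_a, true_b = 1e6, 0.3
loss = true_a * compute ** (-true_b) * rng.normal(1.0, 0.02, size=compute.size)

# A "scaling law" is just a least-squares line in log-log space:
#   log L = log a - b * log C
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
fitted_b = -slope

print(f"fitted exponent b ~ {fitted_b:.2f} (ground truth 0.30)")
# The regression summarizes the runs it was fit to; calling that a "law"
# quietly assumes the trend continues forever, which is the marketing part.
```

An empirical regression with a good R² on past data is still just a regression, which is exactly the sense in which "law" is doing promotional work.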
I keep reading that AI will make new scientific discoveries. However, it seems prudent to me to wait until it has actually done that in a lab before making the sort of massive investments we have been seeing. I suppose that's why I'm not a millionaire.
No, that is why you are not a billionaire! :)
Better late than never
The greed and vanity that makes the clever so dumb.
Hi Gary
It's hard to be "blindsided" unless you are blind.
There's a level of simple-mindedness here by supposed "scientists" that reveals much about the shallowness (or even absence) of anything like a field.
For example, none of the deep systems researchers of the ARPA and Parc eras in the 60s and 70s were this naive, and they wouldn't have committed epistemological blunders this egregious.
Best cheers
Alan
The back of my head is blind but I am not. It is very easy to be blindsided if you are not looking.
Note that we live in the 21st century, with technologies and ideas available that can allow us to know what is around us, 360 degrees - right and left, north and south.
Only people who don't understand this are still "blind in the backs of their heads". This is also a metaphor for the kind of naivete and simplemindedness I'm complaining about.
You're kind of a dick, ain't ya, fella?
Sorry that you took offense. I'm just careful with words and meanings.
You wear a hat with cameras in all directions? You walk about constantly turning around to check all angles? I don't think you're careful with words and meanings, I think you're performing, pretending your thoughtless comment was innocent pedantry, and trying to transfer responsibility to me for correctly interpreting it.
I'm not buying it.
Well, I'm not selling anything, nor was I trying to be literal. Obviously, I also failed at being clear.
But, surely I was combining literality and metaphor -- following the way that Gary used "blindness" (he didn't mean "not being able to use one's eyes") as a metaphor for brains not being able to handle what's around.
This is also why I used "technologies and ideas available" above, in the same spirit, to point out the lack of use of these new tools to avoid various blindnesses.
And, no I don't wear a hat with cameras in all directions. However, I've been a scientist for almost 70 years, and I do " constantly turn around to check all (as many as possible) ideational angles". This is because the doorway into scientific thinking is the realization that "The world is not what it seems" (to our commonsense minds). In other words, my "all directional hat" is internal.
And I can assure you that there was a lot of thought behind my comment. (That's my profession and habit.)
The AI Hype Machine Runs on Ignorance About Human Intelligence (HI). If It's Not Ignorance, It's Lies.
As a neuropsychiatrist, I've spent a decade studying one fundamental question: how does human intelligence actually compare to AI?
In 2014, I wrote that the entire AI-HI comparison was a myth. When ChatGPT exploded onto the scene, my conclusion hardened: if market valuations of LLMs rest on the assumption that they're approaching human-level intelligence, we're living inside a massive hype bubble. Because they're not even close.
The problem? The AI-HI comparison is built on a foundation of profound ignorance about the second half of the equation: human intelligence itself. Nobody - not the AI providers, not the consultants, not the breathless tech journalists, not even some AI scientists - actually seems to understand what they're comparing AI to.
What we're left with is marketing claims that are, to put it bluntly, pure baloney.
If they really don't know even the basics of HI, then their claims about AI, and their expectations and fears, are pure ignorance. If they are not ignorant, they are lying. A one-trillion-dollar lie. I see no other possibility. Do you?
You are familiar with Karl Friston’s FEP?
There is a formidable body of work there, which I haven't really read up on despite knowing Mark Solms.
I am with you.
What is tragic, and has been true now for decades, is that what is driving research and development thinking and activity is the lust for trillions by a few, and not research to better understand and apply intelligence with the best values.
So true.
Feels like we are on Easter Island, chopping down the last tree.
What about the fact that they are consumed with how they could reach AGI, but they aren't asking whether they should?
And I’m not just talking about the Jurassic Park issue; but the electricity and other resources.
"This technology might take over the world and exterminate humanity. Can I have another billion dollars to develop it?" Sheesh!
The AI scientists are building Jackassic Park