Doesn't he announce a comically grandiose project every time one of his rockets explodes?
Seems a likely response based on a fairly well-documented set of human biases. Grok offered this summary: "The behavior is most commonly driven by overconfidence bias, defensive optimism/denial, or narcissistic traits, with manic episodes as a less common but possible clinical explanation. The immediacy and grandiosity of the response often reflect a need to protect the ego, restore public image, or avoid emotional pain. If the behavior is recurrent or severely disrupts the person’s life, it may warrant professional evaluation for underlying mental health conditions like bipolar disorder or narcissistic personality disorder."
I wouldn't want the USA to lock up dissidents the way Soviet Russia did, simply because their views do not conform to the state's. Given the extreme statements from the current US administration, it wouldn't surprise me if they tried that approach on "liberals" and others who refuse to accept the authoritarian (possibly Christian nationalist) view as the only acceptable rational one.
I think this one coincided.... ;)
His rockets explode a lot and he himself is a farce. We just call this a coincidence.
Terrifying, but also sadly predictable. One small comfort is the incompetence endemic to reality-denying authoritarians.
As we've found out in the last decade or so, incompetence is not disqualifying. Just the opposite. To a large portion of the American people, incompetence is undetectable.
We may be saved by the AI oligarchs themselves, fighting each other.
Indeed. The breakup of Microsoft & OpenAI is already underway & could get pretty spectacular. Whatever happens in their war, I’m cheering for the bullets.
The ability to control minds won't go unnoticed by those who seek to control minds.
Nonetheless, his idea of having an LLM seek truth and then iterate on itself is going to fail. That can only result in worsening hallucinations until model collapse.
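To make that concrete, here is a minimal toy sketch of the model-collapse dynamic (a statistical caricature, not any real training pipeline): fit a simple distribution to data, sample from the fit, refit on the samples, and repeat. With finite samples the errors compound, and diversity drains away generation by generation.

```python
import numpy as np

# Toy model-collapse demo: each "generation" fits a Gaussian to samples
# drawn from the previous generation's fit, i.e. the model is repeatedly
# retrained on its own output. All numbers are purely illustrative.
rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0   # the original "ground truth" distribution
n_samples = 200        # finite data per generation

for generation in range(10):
    data = rng.normal(mu, sigma, n_samples)  # sample from current model
    mu, sigma = data.mean(), data.std()      # refit on own output
    print(f"gen {generation}: mu={mu:+.3f}, sigma={sigma:.3f}")

# The fitted sigma tends to drift downward across generations: the tails
# are progressively lost, the statistical analogue of a model that grows
# narrower and more confidently wrong each time it learns from itself.
```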
AI is such a fascinating field where the search for intelligence seems to destroy most of the intelligence from those seeking it.
But given Elon’s brutal incompetence at anything that isn’t getting people to pay attention, I doubt he will be able to pull off his 1984 nonsense effectively.
Still haven’t found an application where I would want to use an LLM. At best you’re getting the average internet opinion, and at worst you’re getting some techno-feudalist trying to incept their fascist worldview into your mind.
Sure, it sounds scary. Yet there are mitigation measures. The first is simply not to use "AI" when you don't really have to. Why always look for shortcuts and delegate to untrustworthy sources research work that is not that difficult to do from reputable sources? And it is not just about AI risks: the best defence of democracy is genuine education (brains well formed rather than well stuffed ...)
What’s the real objective of having LLMs?
We are living through the world’s largest interactive proof of concept (TPOC). Not only are companies searching for a use case for this technology, but many are trying to fine-tune the models in order to tout their uniqueness.
So far, LLMs’ real impact has been on people’s perception of what is true, far more than on whether they actually work as they’re supposed to.
What I find even more fascinating than Musk’s delusional idea of how he can modify Grok is your mischaracterization of the four charts as “LLMs track not far from the center.” If these charts show anything, it’s exactly the contention of many rational people: our institutions of higher learning skew far “left” and have slowly but surely pushed the political leanings of elites (the people who create most of the content fed into these LLMs, and certainly all the creators of the LLMs) further to the political left. We are also well aware that the companies behind these models work very hard to push them closer to the center so they won’t be totally detached from the majority of the buying public. None of these LLMs are in any way, shape or form “centrist”.
As for the charts themselves: while I value the exact same things today that I did in university, I have spent years “deprogramming” myself from the political brainwashing I received at a very elite institution. So it’s not surprising that I have moved from the far bottom left to the center right. The fact that I value the same things indicates that political views (which these tests measure) are detached from values and are rather signifiers of tribal membership. That makes LLMs skewing left even more concerning.
Finally, if you aren’t part of the tribe, you will find Musk’s characterization of “The Information” closer to the truth than the elite consensus. Is it really a good thing that LLMs spout the elite tribe’s consensus? I appreciate very much your attempt to bring sanity to the superficial thinking around AI, but the same superficiality infects all areas of discourse around technology, politics, economics and society in the US. Musk’s approach may be ridiculous, but he is on to a serious issue which the elite tribe does not want to face. If we are concerned about mind control, we should worry more about what is going on in our universities than about LLMs.
"Elite tribe", "far left": The far right's silly catch frases.
Um, the very geometry of those charts lays out “far right” and “far left”, so neither of those terms is made up. I agree that people often use those terms as tribal markers and as a pejorative for their “enemy”. I agree I should have consistently put “left” and “right” in quotation marks, since it is far from clear (beyond the tribal aspects) what the political content of those terms is. And sorry, talking about elites is an artifact of my elite education. I try to avoid the “far right” cesspools, so I’m really not up to date on their catch phrases.
I graduated in 1990. It’s impossible to know, but I think my values are pretty stable. What’s changed is how I think about the process of political change.
For example, I’m generally pro-immigration. However, there is a rock-solid 30% of the country who are dead set against it. So any immigration plan needs to incorporate their concerns. Otherwise things can go wrong pretty quickly.
This dynamic applies to a lot of contested political issues, especially cultural issues.
Right, but the OP described his political orientation moving from 'far left' to 'center right', which seems to indicate a change in preferred outcomes rather than a change in preferred tactics. As I generally think of one's values and one's preferred outcomes as being inextricably linked, I was hoping to get some counterexamples.
Hopefully you have seen my answer by now, but having seen this comment, I understand a bit better how to answer you in a more general way. I disagree that political positions have anything to do with outcomes. I’ll focus on the economic issue I raised.

Right or Left, most people will agree that all people should live a life of dignity, and living in poverty is not that. A desired outcome shared across the political spectrum is that people should have a decent job and be able to afford the basics of life. Where people’s politics differ is on the solution: how to achieve the outcome.

People on the extremes are messianic: they believe poverty can be abolished in either a socialist or a libertarian paradise. Those aren’t outcomes but dreamy solutions of creating a new and better society in which the problem goes away. Most people from the middle Left to the center believe that throwing more money at the problem will bring the desired outcome faster, whereas people from the center to the Right feel that individual initiative, inculcation of traditional values, and perhaps a bit of private charity where needed can lead to the desired outcome. Obviously these descriptions are a bit of a caricature, but hopefully you get the point.
While I am not at all a libertarian (since I am not a Messianist by nature), I have moved far from the “throw money at the problem” approach I supported in my youth, and I agree fully with someone like President Milei of Argentina, who says “you can’t hate government enough.” Usually the “Left” response to someone who makes such a shift is ad hominem attacks based on class and age, which helps them avoid serious engagement with the political issue. And I am certainly not claiming this is just a problem of people on the “Left”. There is very little serious engagement in politics (on either side of the political spectrum), as the person who complained about me using silly catch phrases nicely illustrates.
However, the “Left” bias we used to get in universities is now further exacerbated by a messianic cult that has taken over the Humanities and Social Science departments of major universities. This cult tries very hard to indoctrinate students and to shut down independent thought, valuing unthinking belief over doubt and inquiry. To return to the topic at hand, the cult of AI is a disturbing sub-genre of this craziness, but more a symptom than a cause.
Thank you for your response and the illustrative example provided.
I would counter that certain political positions are very tightly coupled to outcomes, e.g. if you believe that all adult citizens should have the right to vote in elections then this is a binary choice: either the franchise is universal or it is not. Clearly, as you point out, this is not the case in situations where a given outcome could potentially be achieved by a range of policies.
Yeah, human reality seems to lean left, and the data LLMs were trained on reflects that. Maybe it is because the left embraces humanistic traits like compassion, love, hope, freedom, curiosity and openness to new ideas. The political right (at least in the US) seems to embrace hatred, rage, greed, ignorance and sticking to dogmas.
For example, helping people in need is a value that was hammered into me since I was a child. After university, I was convinced that the reason poverty hadn’t been ended was that government did not spend enough on helping people, and like most university graduates I had vague socialist leanings. I now believe all government spending on anti-poverty services, along with the whole NGO ecosystem built around government spending, should be completely abolished and replaced solely by a reverse (negative) income tax. It is clear to me now that these programs cause more suffering and harm and are exploitative of the very people they pretend to help, while playing on our surface desire to “be good”.
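For readers unfamiliar with the mechanism: a reverse income tax (more commonly called a negative income tax) pays a transfer proportional to how far income falls below a threshold, so support phases out smoothly instead of vanishing at a benefits cliff. A minimal sketch, with the threshold and phase-out rate as purely illustrative numbers rather than a policy proposal:

```python
def negative_income_tax(income: float,
                        threshold: float = 30_000,
                        phase_out_rate: float = 0.5) -> float:
    """Transfer paid under a simple negative income tax.

    A filer earning below `threshold` receives `phase_out_rate` times
    the shortfall; anyone at or above the threshold receives nothing.
    Both parameters are illustrative placeholders.
    """
    shortfall = max(0.0, threshold - income)
    return phase_out_rate * shortfall

print(negative_income_tax(10_000))  # 0.5 * 20_000 = 10000.0
print(negative_income_tax(40_000))  # 0.0
```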
Speaking of NGOs: when I was in university I seriously considered joining the Peace Corps. Today my take on nearly all of these human rights, foreign aid, and other do-gooder NGOs is that for the most part they exacerbate the problems they claim to solve and are corrupt leeches on Western governments and on the poor people they pretend to help.
These are just two of many positions that I have changed over the years.
These changes in positions aren’t based on superficial “feelings” or, in this case, a lack of interest in or empathy for the poor and suffering. On the contrary, they are principled positions based precisely on a concern for the poor and suffering. They are the result of deep knowledge (including that acquired over many years of working inside parts of these sectors), investigation and analysis. The key to changing deeply held political positions is to be willing to challenge the assumptions and ideology you “swim” in, to follow where the facts lead you, and to prioritize your core values over ideological stance.
I am curious how your political positions have changed while your values remained static. Could you give an example or two of the values you still hold from your university days, together with some of your former and current political positions?
What’s fascinating here is how much trouble Musk seems to be having getting his Grok chatbot to fully embrace a hard MAGA stance.
LLMs operate as a kind of superposition of the attitudes and beliefs underlying their training data. Fine-tuning and prompting carve out a slice of that spectrum to simulate a particular persona. If it can channel Shakespeare, why not Hannity?
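One toy way to picture that superposition-and-slicing claim (the persona names and probabilities below are invented for illustration and say nothing about any real model's internals): treat the pretrained model as a mixture of persona-conditional distributions, and treat prompting or fine-tuning as reweighting the mixture toward one slice.

```python
import numpy as np

# Each persona has its own distribution over three stand-in "opinions".
personas = {
    "bard":    np.array([0.7, 0.2, 0.1]),
    "pundit":  np.array([0.1, 0.2, 0.7]),
    "neutral": np.array([0.3, 0.4, 0.3]),
}

def mixture(weights: dict[str, float]) -> np.ndarray:
    """Blend persona distributions; the weights play the role of a prompt."""
    mix = sum(w * personas[p] for p, w in weights.items())
    return mix / mix.sum()

# Unprompted: a broad superposition of every voice in the training data.
print(mixture({"bard": 1/3, "pundit": 1/3, "neutral": 1/3}))
# Heavily prompted/fine-tuned: the mixture collapses toward one persona.
print(mixture({"bard": 0.05, "pundit": 0.90, "neutral": 0.05}))
```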
Is it possible his team simply skipped the obvious step of feeding 20 years of FOX News into the training data?
But then again, right-wing narratives are already abundant in online content. LLMs should be more than capable of mimicking them—through cherry-picking, innuendo, and the full arsenal of TV-style persuasion.
So it’s puzzling if the X-MAGA-bro persona is flubbing the performance. Or maybe it isn’t, and we’re just witnessing Musk’s irritation when balance seeps in.
I suspect that if you refine an LLM on Fox's output you're likely to get something useless. Remember that opinions on Fox are tuned to whatever stance is convenient for its owners. They change on a dime, fitting narratives to whatever the oligarchs need: free trade supporters on Monday, tariff supporters on Tuesday. The only common ethos is to bend to whatever the master wants.
I'm not sure the LLM can learn how to paint bullseyes around the bullet holes quite the way they do.
This is why books are so important: they are not digital and cannot be changed.
A housing estate built on every kinda sand
I dare Musk to rewrite anything with an LLM and also check for input that isn't, in some way, screwed. If this forces him and his team to read based Grokified Dostoevsky to check for rewrite biases, so be it.
3 things:
1) "It turns out that, thus far, the major LLMs haven’t been that different from one another, as multiple studies have shown." They're so similar that I initially believed DeepSeek stole foundational elements of ChatGPT (and I was far from alone). That doesn't seem to be the case.
2) "Almost every LLM, even his own, could be argued, for example, to have a slight liberal bias." ChatGPT has more than a "slight" liberal bias with me. But I'm very liberal so I assume it's just trying to appeal to me.
3) "LLMs may not be AGI, but they could easily become the most potent form of mind control ever invented." That's always been the truth lurking in the background of the race to AGI. We don't have to achieve AGI for these systems to be incredibly helpful or destructive. I hope people keep rejecting the jewelry these companies want to wrap around their fingers, necks, and eyes, but who knows?
Exactly! AI (by which everyone means LLMs) is a perfect addition to the tools of surveillance capitalism and state surveillance. À la Varoufakis, technofeudalists have the most to gain.
Luckily nobody serious uses Grok, right?
I'm uncertain whether this qualifies as a counter to your "nobody serious" caveat; however, I find it somewhat strange that I see a number of independent media sites broadcasting references to the Grok outputs they get in response to questions they pose it.
That's important at least in that they amplify an acceptance of LLMs as worth considering, if not indirectly endorse Grok through that use.
I suspect it's intended as a somewhat tongue-in-cheek use: particularly when critiquing statements Elon Musk or 'the Donald' has made, and especially when Grok directly refutes what Trump or Musk has said.
Then again, many if not most independent media reporters (at least the ones I tend to watch) appear to lean apocalyptic in their AI commentary, so perhaps using LLM tools somewhat uncritically also makes sense.
The only thing that equals their drive towards control and evil is their stupidity.