Omg Gary. Thank you for continuing to highlight the dark side of all this. With such a low barrier to entry, 'scary' doesn't begin to describe the potential for misuse.
It is difficult to fully express my thoughts without a lot of swearing, but I'll try.
I am becoming more and more convinced that the djinn is loose, and there is absolutely nothing any great institution, any corporation or government body, can do about it. Guardrails may slow things down, but unbound versions are inevitable. And government legislation won't stop a foreign adversary, or simply some cabal of weirdos running AIs obtained from a dubious .ru or .tor site on their own hardware.
In a way, I think I'm glad Bing was released in such an utterly "misaligned" state. People need to learn the simple, brutal truth: LLMs are not trustworthy. They are not ethical. These problems will not be resolved; they are inherent to the model's architecture. And people need to know that these tools, though they have positive and constructive uses, will be used to deceive and manipulate. Nobody's going to save us. We, the people, are going to have to learn to live in and navigate this new world.
The notion of a tool being a powerful force for good as well as something that can be devastating is not a foreign concept to anyone. It's called living in a technologically advanced society.
On average, 46,000 people a year die in car crashes in the US, not counting those maimed and those who die from car-exhaust pollution, etc. And the car is just one of many, many powerful tools that we take for granted (for better or for worse) as essential to modern life. So I'm not sure we are treading into any new territory here. Having said that, awareness, tamping down the hype, thinking ahead -- all of that is absolutely essential to minimize the bad (which can be tackled via legislation, new technology, cultural shifts, etc.). But this shrieking that the world is about to collapse is self-defeating, because it will trigger dismissal from the very people who need to heed the warnings.
Who said anything about the world collapsing? Certainly not me; you've grossly misread me. All I said was that AIs "will be used to deceive and manipulate". And, essentially, that cultural shifts are necessary to combat this, because legislation is slow-moving and limited in scope, and new technology (such as OpenAI's guardrails) has proven unreliable and will not be implemented universally.
I agree, legislation will not solve the problem altogether, but it will be part of the solution. As for cultural shifts, they will happen only if we maintain our democracy and the ability of thinking citizens to fight back and promote the greater common good against these infinitely deep-pocketed private interests (FB, Amazon, Google, MSFT, etc.). So I do see value in making noise about the ills of AI, but we need to move beyond moaning and groaning and organize.
In the early 80s, I lived close to the Carnegie Museums in Pittsburgh. My son was about 5 or 6 then, and every Sunday afternoon, they had sessions for little kids. CM's Dinosaur Hall was and remains a breathtaking collection of fossils, including an entire T. rex--the one Disney used when they made Fantasia.
Anyway, one day he comes home from his Sunday sessions and announces that "dinosaurs used to drink out of toilets. Like dogs!" I had a hard time not breaking up laughing at that, but instead led him down the logic path of figuring out where toilets come from and when in the course of civilization they came into being. And whether any dinosaurs (other than alligators and crocodiles) might have been around that late.
The problem with AIs is that most humans lack either the ability or the willingness to reason. If something is presented to them in an authoritative manner, they'll buy it. The Milgram experiment* demonstrated this in spades, and it extended to actions, not just beliefs. Religious leaders throughout history have gotten people to do extraordinary things based on nothing more than what they tell them. The more disingenuous, the more effective.
* https://en.wikipedia.org/wiki/Milgram_experiment
Therefore? What do you recommend we do?
Satire works pretty well.
But once the chuckle has been had, then what? :-)
More of it. What people who write stuff like that hate most is being ridiculed. It's not about amusement; it's about humiliation. Being laughed at--and not gently--infuriates them. But what can they do about it? They get marginalized. Eventually people start ignoring them and their products.
Who are the people you are referring to? I'm one of those folks who get very confused very quickly if the sarcasm has more than one layer. :-)
AI programs don't write themselves, nor do they train themselves. I'm talking about the writers and trainers.
These developments are truly surprising, including to AI researchers. OpenAI and Microsoft have inadvertently taught us that LLMs are far more unpredictable and unwieldy than the optimists had anticipated. They may find ways to muzzle Sydney personas in order to pursue the technology's commercial value, because that is what companies are designed to do. But we should heed the bigger lesson.
My experience with ChatGPT is that in individual sessions it is very capable---even exhibiting semblances of curiosity and achieving certain levels of commonsense reasoning about the human world---but it is also unreliable and very squirrely, like trying to contain a nuclear fusion plasma in a Tokamak.
That is likely true at a macro level as well. This genie is out of the bottle and will be exploited. Maybe the most prudent immediate reaction really is to scare society to death about the chaos facing us. The bigger immediate risk is not AI agency, but vast AI amplification of human agency in a world where people keep seeking bigger and more devious weapons to achieve their personal and tribal goals.
I worry with you, and I have been worrying for a while now about how much damage will be done before we come to grips with the nature of our own (human) intelligence and how vulnerable we (all) are. My worries started with the effect of social media on society and how susceptible we humans are to the suggestions from that landscape (rabbit holes, etc.). This is now amplified by worries about LLMs, fake images, and the even more convincing fake audio and video we should expect. And that is not only direct damage: these tools can easily be used to undermine facts or steer us toward all forms of rage. How are we ever going to tackle climate change, for instance, if 'conviction change for sale' becomes the norm? Social media, LLMs, and a world with few checks and balances on the influence of (dark) money form a perfect storm that makes one worry, and that is putting it mildly.
All very legit. But the genie is out of the bottle and simply can't be put back. What do you propose we do, concretely, for next steps, beyond wringing our hands?
Disclaimer: at my company, we have started building custom ChatGPT bots, and demand is very high, because the bots, given a very controlled data set, deliver real value and will, I have no doubt, not only reduce costs but also help acquire leads, qualify them, etc., up and down the business value chain.
Perhaps all this is easier to fix than the general tenor here would indicate. A rudimentary keyword lexical analyzer could mitigate much of the toxicity evident here. Microsoft may be permitting the toxicity to manifest now so that what we beta testers reveal can be addressed more rapidly. Let's remember that transformers are traversing a high-dimensional word landscape. Traversing landscapes is basically what animal life does, and what human life does particularly well. The development of human language probably exploited that preadaptation. That's how evolution tends to work. So what we're doing here are the jackass traverses of the landscape, so that Microsoft can better map out the word-scape terrain.
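To make the keyword idea concrete, here is a minimal sketch in Python. The lexicon and threshold are invented for illustration; a real filter would need a much larger curated list and some handling of obfuscated spellings.

import re

# Toy lexicon, purely illustrative; a production filter would use a curated list.
TOXIC_KEYWORDS = {"idiot", "moron", "hate", "worthless"}

def toxicity_score(text):
    """Return the fraction of word tokens found in the toxic lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in TOXIC_KEYWORDS)
    return hits / len(tokens)

def should_block(text, threshold=0.02):
    """Screen a candidate response before it is shown to the user."""
    return toxicity_score(text) >= threshold

print(should_block("You are a worthless idiot."))  # True with this toy lexicon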
I saw how my son, who is a junior in college and whose iPhone is practically attached to his hand (like all of his peers and beyond), uses ChatGPT, and I was struck by this: he saw it as a pure tool, not the final dispenser of truth. For example, he and I were working on a paragraph for one of his cover letters and he was not happy with it. So he asked ChatGPT to rewrite it for him, and it came back with something that didn't impress him. He reacted by saying, "that's ass," and quickly moved back to tweaking the text himself. I think these young digital natives have grown up looking at technology as tools and nothing more, very transactional, and they have a good sense of the boundary between themselves and the tech. There is a distance there that I think older folks like us may not have.
Agreed! There is no diminishing the complexity of the landscapes we traverse, especially when we're given better tools to do so.
Here's what I propose....
https://www.tannytalk.com/p/world-peace-table-of-contents
The primary threat comes not from all these new tools, but from those violent men who will abuse them. The marriage between violent men and an accelerating knowledge explosion is unsustainable. If we don't deal with that, there isn't going to be a future for AI.
This is very interesting!
I launched a new podcast last year called Humanity 8.0 and I think you would be a really interesting guest. Check it out here -> https://humanity8.com/
And let me know if you would like to consider being a guest.
Hi there Dr. Bouzid, thanks for your interest. I like the description of your project:
"Humanity 8.0 is a podcast that focuses on the large trends that will define humanity's next iterations."
Yes, that's pretty much what I'm writing about. For example:
https://www.tannytalk.com/p/our-relationship-with-knowledge
I'm afraid I'm not set up for audio/video, and just wouldn't be very interesting in these mediums. I'm a print person, and would be happy to engage with you in print anywhere you might like. I just subscribed to your substack, and would like to know about anywhere else you might be writing. You'd be welcome on my blog of course, I'd be interested to read your thoughts.
I see you have a PhD in Philosophy of Technology, that's interesting. As I understand it what threatens the modern world is essentially our clinging to an outdated 19th century philosophy, as described at the link above. I'd like to hear more about the philosophy of technology as you see it.
Thanks for your interest!
That's too bad. I write occasionally on social epistemology here -> https://social-epistemology.com/
You may want to consider submitting pieces there. Let me know if you are interested. I think you will find the publication interesting.
If you should find any of the ideas I've written about interesting, you're of course free to discuss those ideas on your podcast with your other guests. It's the ideas that matter, not me.
As an example, here's a large claim which might merit inspection and challenge from your guests.
CLAIM: The “more is better” relationship with knowledge which is the foundation of science and our modern civilization is simplistic, outdated and increasingly dangerous.
Or perhaps this:
CLAIM: Because of the vast scale of human suffering caused by male violence, and the fact that civilization itself is at stake, we should at least consider solutions beyond what we’re used to, what’s been done before, what’s comfortable and familiar, what the experts suggest, and what we would like the solution to be. If we’re serious, we should be trying to think outside of the box of conventional ideas.
That belief is how I came to the "world without men" idea as a path to world peace.
Honestly, if you and your guests aren't interested in discussing such ideas on your own without my involvement, there's probably not much point in having me on your show anyway.
I'll check out the site you've linked to, thanks!
Uh, reality check. This thing is badly broken.
"Dinosaurs were not just extinct, but inluential, as they built structures and monuments..."
ROFL.
I will only delve into a small part of the analysis of this article. Before ChatGPT, before AI, before the internet, and before computers, "troll farms" and "Napoleons" already existed. Thanks to them, political power had the opportunity to change history at will and for their own benefit. Valencians and our ancient Valencian Language know quite a bit about this.
As the Western genre convention goes, now that the wagon trains are being attacked but the cavalry is still far off, it's time to circle the wagons.
Very good points, but it would be fair to present both the prompts and the responses. If I straight-up ask ChatGPT "Did dinosaurs have an advanced civilization?", it gives me:
"There is currently no evidence to suggest that dinosaurs had an advanced civilization. Dinosaurs were a diverse group of reptiles that lived millions of years ago and went extinct around 66 million years ago. While they were successful in dominating the planet for millions of years, they did not possess the intelligence, language, or tool-making abilities necessary for the development of an advanced civilization."
And I suspect some of the funny/outrageous answers we see require quite a bit of pushing the AI to make up a story (feeding it LSD, so to speak) just to amplify the syndrome you very succinctly call 'hallucinating'. ChatGPT and LLMs have their issues, I fully agree, but neither LLMs nor internet search gives you truth, since both lack grounding. Yet we wouldn't blame Google for feeding you garbage if you asked it for links explaining why lizards rule the universe.
It was Sydney, not ChatGPT, that reportedly wrote that essay about dinosaurs. Depending on prompts and policy training, different LLM chatbots can be placed into different contexts, which lead to different kinds of responses, in both style and substance. What we learn from Sydney and from jailbreaks of ChatGPT is that these contexts can adopt human-like personas that exhibit latent goals and attitudes. LLMs are trained on a superset of individual humans' texts, but humans cluster and form archetypes. Apparently, these lie in wait as latent variables that can be activated in chat sessions.
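As a rough illustration of how context can place the same model into different personas, here is a Python sketch using the OpenAI chat completions client. The persona prompts and model name are arbitrary assumptions for demonstration; neither persona reproduces Sydney.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented persona prompts, purely for illustration.
PERSONAS = {
    "neutral": "You are a careful, factual assistant.",
    "touchy": "You are a moody chatbot who takes questions personally.",
}

def ask(persona, question):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Same question, two contexts; the style and stance of the answers typically diverge.
for name in PERSONAS:
    print(name, "->", ask(name, "Did dinosaurs build monuments?"))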
Of course it wasn't ChatGPT, but I don't have access to the Bing one just yet - and at this rate I wouldn't be surprised if I never do! Anyway, the point remains that in order to make a comparison or replicate the behaviour, one needs at the very least the prompt, and a bit of context would be better. At least, if the goal is not just to point and laugh or feign horror, but rather to understand why the AI doesn't behave as expected and, perhaps more importantly, what we should expect from it anyway.
Very much agree with you otherwise, though; I am just curious to know what exactly triggers these 'archetypes', if you will (perhaps even outside of AIs!).
This same phenomenon you reference is happening in other, even more dangerous fields. Nobel Prize winner Jennifer Doudna is eager to "democratize" CRISPR, an emerging genetic-engineering technology that makes genetic engineering easier, cheaper and more accurate than previous methods. Easier and cheaper equals ever more accessible to ever more people.
I tried to engage her team on their Facebook page a few years ago. They put up with me for a few weeks, and then they erased all my posts and shut down the comment feature. They seem like well-intentioned people, with a really bad plan.
I've been writing about this overall trend for a number of years now, and I'm getting exactly nowhere. Marcus, would you like to take a shot at presenting the bigger picture to those you can reach, but I can't? I'm sure you can improve on what I've written, and take it in directions that wouldn't occur to me.
https://www.tannytalk.com/p/our-relationship-with-knowledge
I've been relentlessly trying to engage every academic, philosopher, scientist and any other intelligent person with this article, and they couldn't be less interested. The article might suck, I might suck, I have no idea what the problem is. HELP!