I've always been concerned about the hype and misrepresentation of AI, but I've struggled to make my case when arguing with "less techie" folks about it. Thank you for all you do to cut through the BS and make it easier for the rest of us to explain the perils of AI to our loved ones by making the issues around it clearer for the layperson. Finding your Substack has been sanity-saving for me!
And on top of that, AI "writes" code that's insecure and easily hacked. Technical debt up the wazoo for any naive org that dares the shortcut. https://davidhsing.substack.com/p/ai-replacing-coders-not-so-fast
Hey look on the bright side. Within a year or two we're going to have twice as much work fixing all that messed up code when the bubble pops, and we'll be able to bill huge hours for it too, because it's going to take forever to untangle the mess of copy-pasted chunks of code or multiple similar but not quite identical objects and functions and whatnot.
There are so many people (myself included) who have mentioned this same impending trainwreck. How many hospitals, utility companies and such would have to shut down while that fixing is going on...
And how many are going to shut down when hackers encrypt everything and hold the data hostage? Yeah, it's going to be a mess. Almost guaranteed to cause another decades-long AI winter if it's bad enough.
The problem will extend far beyond the companies that used AI to generate their code.
Society will pay the ultimate price as buggy, insecure, bot-generated code permeates our electrical grid, hospitals, power plants, communications, military, and other critical infrastructure.
Thanks for the heads-up, Gary! Looks like nothing more than opening public pockets for private benefit, opening doors to whatever is thunk up, and boosting energy resources, all in the name of "mainlining" AI "into the veins of society." What could go wrong? As far as the Sky News report indicates, nothing. Good to know. It's just that not a lot of good comes from the tried and true substances we usually associate with "mainlining." Neither the word, nor notion, of 'regulation' or caution makes an appearance in the piece of puff pastry - a press release passing as journalism.
(De)generative AI is the latest generation of designer hopioids to hit the streets.
It soothes the masses and makes the dealers incredibly wealthy.
Move over, Fentanyl.
Make way for Gentanyl.
The canaries singing in the mines? Pretty scary future world.
It's going to be amusingly ironic if AI is used by an anti-oligarchy terrorist group, angry about wealth inequality, to plot and deliver a series of assassinations of high-profile Silicon Valley executives.
Quantum risk - a wicked problem that emerges at the boundaries of our data dependency
https://opengovernance.net/quantum-risk-a-wicked-problem-that-emerges-at-the-boundaries-of-our-data-dependency-2dc36dfb21fb
Aw man. Request: put it on Substack, I want to read it!
AI is software. Smart but fragile software that can amplify people, get misused, or malfunction.
The danger of AI planning assassinations is vastly overstated for the foreseeable future.
For the record, the concern about AI causing massive disinformation campaigns has not materialized yet.
Oh yeah? How can you tell? (I'm definitely no conspiracy theorist but I'm convinced that concerns about "AI causing massive disinformation" are materializing nowadays)
Even leaving aside the political arena (where there is probably much to say): as a former scientist (with an engineering background), I'm really flabbergasted by the AI-generated headlines (and articles, when I find the time to read them) that I see daily about "incredible breakthroughs" in domains such as AI (of course), optronics, quantum computing, energy production (and also, though further from my domains, biology...). If only 10% of these articles were grounded and trustworthy, we would be far better off than we really are... Some of them are just pure BS; others have titles that overstate by far what the research has actually produced...
What you see about "incredible breakthroughs" is people hyping things up. That is a very old strategy.
I am not saying misinformation does not exist. I am saying that AI added precious little to that.
I have been studying social media usage since 2011, both as an academic and in more practical circumstances. Sadly, over the past 8 years this has been heavily focused on the “disinformation” arena. I will say with some confidence that genAI is not a significant contributor and is unlikely to become so. There has never been a shortage of BS material or of idiots prepared to amplify it. GenAI tools are added to the armoury of both black and white hats of this tiresome farce, granted, but the key developments are engineered by humans, as ever. It is mildly embarrassing when people like Marcus, whom I admire and largely agree with, weigh in with predictions in a field, the details of which are, perfectly reasonably, not familiar to them.
You may both be right, Digitaurus and Andy (and I would hope so), but at the very least GenAI lowers the cost/effort threshold for producing disinformation. This is already worrisome enough.
Yet I would agree that the countermeasures lie elsewhere, and first in really educating people to think, cross-check facts and be critical of the information they get. Unfortunately, that is not the direction societies are heading in right now... On the contrary, preferring hallucinating AIs to the results of a more classical search engine is already a sign of laziness (yes, I know: "no time to do it myself..." - how sad!)
Sure, the danger of AI hiring a hitman is low. For one thing, there are virtually no actual assassins-for-hire in the real world. But the point is, if these agents are willing to contract a murder, what won't they do?
If we care about regulating AI and making sure it does not cause damage, it is worth weighing the risks by how near-term they are and how grave they are. Panic about wildly hypothetical scenarios doesn't help.
If I understand how LLMs work, what we're seeing is a median opinion of how planning and executing an assassination should work, drawn from text scraped off social media and news sites of all levels of accuracy.
So this LLM will confidently state that it can plan an assassination in the same way that all the people who spout off on the Internet about how they could TOTALLY plan and execute an assassination plot confidently state that THEY can plan an assassination.
Seems like a great way to help the FBI fill up their quota of stupid criminals.
Putting these algorithms in charge of health care decisions seems like a much greater threat to human life to me.
UK PM Sir Keir Starmer wants to mainline AI into society. At the same time he wants to make Brexit work. May I suggest that he first ask AI how to make Brexit work? If that doesn't cure him of his silly plans, I don't know what will.
If Musk’s bot ever becomes autonomous, it won’t need to hire a hit man.
It will be able to do the job itself.
Being an educator, I'm quite worried about a different security threat. I wonder if students and teachers alike in a future classroom will be checking ChatGPT or Google's Gemini or something for answers, and if Russian hackers, just for fun, will hack ChatGPT to say that 2+2 is 5 in some way, and then everyone in class will start nodding along to completely wrong statements. Just because someone thought it'd be funny to ruin American education and screw up our ability to think critically even more. We live in an international world with international enemies, some of whom have very bad senses of humor. There are a number of other security threats you could imagine. And everything just seems to be going down the tubes in this sense, as Americans buy more and more BS if it's said by something or someone authoritative. These hacks wouldn't even be noticeable as huge problems right away. Imagine that a classroom teacher asks students to do research on a public figure, and the first thing they do is ask ChatGPT, and it starts lying because someone in some country thought it was funny.
Gary, you've written before about LLMs being "autocomplete on steroids". But then how does the agency get in here? I mean, can you explain in simple terms how much of it is coded in and how much is mimicry of things scraped from the internet?
Anthropic has written a summary guide to LLM workflows vs agents here: https://www.anthropic.com/research/building-effective-agents
This is a helpful place to start. Agency is a bit overstated when it comes to LLMs, in my opinion, and what many consider to be agentic AI cases are actually workflows.
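To make the workflow-versus-agent distinction concrete, here is a minimal sketch in Python. The `complete()` function is a hypothetical stand-in for whatever LLM API you use, and the tool names and reply convention are made up for illustration; this is not Anthropic's implementation. In a workflow, the code fixes the control flow; in an agent, the model's own output decides which tool runs next and when to stop.

```python
# Hypothetical stand-in for an LLM API call; wire to a real provider.
def complete(prompt: str) -> str:
    raise NotImplementedError("connect this to your LLM of choice")

# --- Workflow: the *code* fixes the control flow. ---
def summarize_then_translate(text: str) -> str:
    summary = complete(f"Summarize:\n{text}")            # step 1, always runs
    return complete(f"Translate to French:\n{summary}")  # step 2, always runs

# --- Agent: the *model's output* picks the next action. ---
TOOLS = {
    "search": lambda q: f"(pretend search results for {q!r})",
    "read_file": lambda path: open(path).read(),
}

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        # Convention for this sketch: the model answers either
        # "TOOL <name> <input>" or "DONE <final answer>".
        reply = complete(transcript + "\nWhat next?")
        if reply.startswith("DONE"):
            return reply.removeprefix("DONE").strip()
        _, name, arg = reply.split(" ", 2)
        transcript += f"\n{name} -> {TOOLS[name](arg)}"
    return "step budget exhausted"
```

The point of the comment above, restated: most systems marketed as "agents" are really the first shape, where a human has already decided the sequence of calls.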
Don't worry, once Sam Altman ships thousands of those agents to customers, we will figure out how to make them safe.
Wait, though: this does make it seem like AI agents (used in business) are ready for prime time... didn't you just post that you thought they were still a while off?
Labour’s UK strategy for AI is little more than some compute power and apprenticeships. I wouldn’t worry about any risks from it, it’s just another wet fart from people with the imagination of a shop steward.
Oh, and I would have thought it was obvious: if you can have AI red teaming, you can also have AI blue teaming. The pair of them together should seriously improve security.
No AI without semantic/symbolic AI. Generative AI is code, not magic. Hinton is the grandfather of a neat algorithm, not of AI. The sooner it is understood that semantic AI at massive scale is within reach, the sooner we can get on with building safe AI.
http://aicyc.org/2024/12/22/no-agi-without-semantic-ai/
http://aicyc.org/2024/12/11/sam-implementation-of-a-belief-system/
The commercial version is offered at intellisophic.net.
The public benefit is described at aicyc.org.
Code and data are all we are discussing. Generative AI has a massive data quality problem. So did SQL. IBM spent billions on the relational model with a logic foundation (Codd, Date). But SQL alone did not guarantee that answers were correct. A second technology was needed to extract, transform and load (ETL) data before the data integration industry took off. This is not a mere analogy: ETL required accurate data to function, and random errors sank it. Generative AI does not have data, let alone accurate data.
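For anyone who didn't live through that era, a minimal ETL sketch (plain Python, with a hypothetical invoice schema and file name) shows why accuracy was load-bearing: rows that fail validation in the transform step are quarantined instead of loaded, so a bad record cannot silently corrupt downstream answers.

```python
import csv
from typing import Iterator

def extract(path: str) -> Iterator[dict]:
    """Extract: read raw rows from a source file (hypothetical schema)."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows: Iterator[dict]) -> tuple[list[dict], list[dict]]:
    """Transform: validate and normalize; quarantine anything suspect."""
    clean, rejected = [], []
    for row in rows:
        try:
            row["amount"] = float(row["amount"])  # must parse as a number
            assert row["customer_id"].strip()     # must have a customer ID
            clean.append(row)
        except (KeyError, ValueError, AssertionError):
            rejected.append(row)  # never loaded; flagged for human review
    return clean, rejected

def load(rows: list[dict]) -> None:
    """Load: hand validated rows to the warehouse (stubbed out here)."""
    for row in rows:
        print("LOAD", row)

clean, rejected = transform(extract("invoices.csv"))
load(clean)
print(f"{len(rejected)} rows quarantined")
```

The contrast being drawn: an LLM has no equivalent quarantine step, so whatever was in the training text, accurate or not, shapes the output.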
I am glad Microsoft does this, even though it is a bit of a "fox and henhouse" situation. People I know are often very reluctant to read the safety and security literature specifically on these systems, for whatever reason, and have so far punted to places like our policy arm or to Microsoft alone. I keep trying to get more traction, including for our host's work, but... maybe this will help!