We don't even need to regulate AI to stop insurance company ripoffs. All we need to do is institute guaranteed healthcare as every single other modern country on Earth has done (and some that aren't so modern) and voila.
greetings from my adopted home of Canada!
It is that simple. Free at the point of use. Duh.
Looking upstream to the provider piece: since healthcare is an incredibly labor-intensive human effort, you'll need to set standards for pay that will amount to enslavement for those who become doctors and nurses. Free to you means no, or surely lessened, remuneration for those who treat you.
Keep fighting the good fight, Gary!
It is of course not at all ironic that OpenAI, a company founded to avoid the evils of AI, would be the one to develop military AI that may eventually kill us all off. Where have we seen this movie? (Hint: T2).
All US big tech is beholden to the military industrial complex. Nothing changes.
well, not all, and certainly not right away. OpenAI has made many choices along the way, as the Bloomberg piece highlights.
AI applied to nuclear war....
Indeed, I wrote about that scenario in this piece for Scientific American https://www.scientificamerican.com/article/has-ai-already-brought-us-the-terminator-future/
It’s unreliable and pushy.
And the ads are all about how you can use it to avoid doing the one thing we need to do more - talk to other humans.
Had a problem with a badly designed procurement website. Instead of struggling, I went over and asked a lovely admin who supports the top executives. She solved my problem immediately. And I got to compliment her, thank her, and have a positive human interaction. Imagine that.
I recently tried to reach customer service at Autodesk, the maker of AutoCAD, about a licensing issue I was having. The company has deployed the most insufferable chatbot imaginable, which is totally useless at a practical level.
I finally got through to a person. It was such a relief, and was so much more efficient. She was able to help me immediately and had my issue resolved within just a few minutes.
This tech stuff is total bullshit. People and human interaction will always be the answer.
Insurance companies have never needed AI to be a big pain in the a--
I wonder how much the use of AI in health insurance, and similar situations, is driven by AI having infinite tolerance for causing customers pain. It avoids having actual human employees screw people out of coverage and eliminates the subsequent hiring and turnover problems. It's modern healthcare's version of the firing squad, or of using multiple executioners where only one unknowingly delivers the poison.
…keep saying build a “better more dependable AI”. Consider whether AI is even a technology worthy of humanity! It is sucking all the oxygen out of investing in other technologies and innovation. Could we not be spending way too much time, money, and effort on something that all the evidence shows can’t live up to the hype? We could be delaying other great innovations and technologies to “save” an unworthy and unsafe “AI.”
The level of regulation and trust required to make AI “safe” is increasingly not looking realistically achievable. Fascination has gotten in the way of rational, logical thinking and basic common sense.
"And there is a possible world in which we take a breath and ask how we can build a better, more reliable AI that can actually serve society, taking steps to make sure that it is used safely, equitably, and without causing harm." By what mechanism do you foresee this outcome as possible?
As we watch quadcopters murder people in Gaza, and are exposed to videos (both of which I will not watch) of Russian or Ukrainian infantry trying helplessly to run away from drones, do we not think that this is coming for us too? The darkness envelops.
In related news, various organizations, such as the Arms Control Association and the Union of Concerned Scientists, strongly supported a Congressional effort to ban the use of AI in the command-and-control process for nuclear weapons. We advocate always having a human in the loop, and point to past near misses due to equipment or software malfunctions or errors as justification. Unfortunately, the bill failed to muster sufficient support.
Could it be that Congress trusts a bot they do not comprehend more than the incoming President?
We were told AI could develop new cures; instead it is used to prevent people from getting cured.
I surmise that since OpenAI doesn't yet see on the horizon any civilian killer app for the current SOTA GenAI that could form a reliable source of profits, and since it is burning huge amounts of cash daily training and running these giant predictive statistical behemoths, it is now desperate to push ahead with whatever opportunities come its way.
The desperate search for a GenAI-based Killer App has failed, so now the only way for OpenAI to sustain itself and thrive is by developing a GenAI-based App to Kill.
OK, so LLMs are a toy; they are never going to become important. But what you continually evade is the necessity of emulating the human's Unconscious Mind. This is going to be a lot of work. Most of the people working in AI are attempting to treat it like a program, or doing dumb things like using static devices in dynamic situations (ML for autonomous vehicles, anyone?). It requires a completely different approach - every time someone comes up with a shortcut (Prolog, Expert Systems, LLMs), it pushes the goal further away. That different approach is going to require completely different people, who are going to require at least 10 years of training. The easy methods (carryovers from programming) aren't going to work.
You were a professor of psychology; it should be right up your alley.
https://semanticstructure.blogspot.com/2024/12/llms-and-military.html
It's going to be more like a hundred years, because at the moment we have no clue at all how to do it. You can't train people on what isn't known.
The IDF is fighting the first AI-based inhuman war, with drone-based imaging used to identify and track ostensible combatants and implement "optimized" aerial weapons targeting, with an allowable algorithmic weighting of an estimated ten civilians per foot soldier and 100 civilians for opposing 'commanders'. The resulting reported death tolls are based on body counts, and are therefore under-reported by the numbers of dead buried under rubble. The bird's-eye view from thousands of meters up has killed record numbers of medical personnel, humanitarian workers, journalists, women, and children - this from reporting by Deutsche Welle, which, if anything, is biased toward the occupiers.
And where does DW get that information? In Gaza, anyone who doesn't report what Hamas wants gets shot. So it's not real information.
Yes, hard to see how we avoid dystopia at this point -- unless some AI-caused or AI-associated massive disaster happens soon and the world gets smart about a robust international treaty system with real teeth, akin to the nuclear weapons treaty system. And stat.
LLMs have plateaued, so the "bleak future" is on hold, unlikely to progress much further without substantial changes to the technical approaches they are using, which will take time. Add 20-30 years to the dates on the "positive future" slide and we just might be in with a chance.
While I agree about the plateauing, I don't agree that the bleak future is on hold: LLMs, as weak as they are, are quite handy for creating very dystopian outcomes.
If you read Kafka, then you know that no LLM will ever be as vicious in denying somebody their rights over formalities or minutiae, but they are perfectly capable of doing it simply because they are ill-trained or ill-instructed.
If it's less than superintelligent, then humans can always outsmart it, and we can always switch it off. LLMs are less than superintelligent, and therefore not capable of successfully instigating the worst possible dystopian outcomes.
True, but people using them are capable of doing so and in many cases they won't switch them off. The blurring of the definitions of algorithms and AI means that senior managers can use algorithms as a cut-out between themselves and the consequences of their decisions. 'AI-powered' = We take no responsibility for our terrible actions.
have you used Claude? Not at all plateaued.
I just asked Claude to multiply 3857389959 by 2358002388, and it very confidently gave me the wrong answer.
https://ibb.co/RB8qkxB
Brace yourself for disappointment! :-)
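For reference, the exact product in that example is trivial to verify with ordinary integer arithmetic - a minimal Python check using the same two numbers quoted above (any calculator or interpreter will do; the point is that conventional software computes rather than guesses):

```python
# Exact multiplication of the two numbers from the Claude example above.
# Python integers are arbitrary-precision, so the result is exact.
a = 3857389959
b = 2358002388
print(a * b)
```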