50 Comments

We don't even need to regulate AI to stop insurance company ripoffs. All we need to do is institute guaranteed healthcare, as every single other modern country on Earth has done (and some that aren't so modern), and voilà.

greetings from my adopted home of Canada!

It is that simple. Free at the point of use. Duh.

Looking upstream to the provider piece: since healthcare is an incredibly labor-intensive human effort, you'll need to set standards for pay that will amount to enslavement for those who become doctors and nurses. Free to you means reduced, or no, remuneration for those who treat you.

Keep fighting the good fight, Gary!

It is of course not at all ironic that OpenAI, a company founded to avoid the evils of AI, would be the one to develop military AI that may eventually kill us all off. Where have we seen this movie? (Hint: T2).

All US big tech is beholden to the military industrial complex. Nothing changes.

well, not all, and certainly not right away. OpenAI has made many choices along the way, as the Bloomberg piece highlights.

AI applied to nuclear war...

Indeed, I wrote about that scenario in this piece for Scientific American https://www.scientificamerican.com/article/has-ai-already-brought-us-the-terminator-future/

It’s unreliable and pushy.

And the ads are all about how you can use it to avoid doing the one thing we need to do more of: talk to other humans.

Had a problem with a badly designed procurement website. Instead of struggling, I went over and asked a lovely admin who supports the top executives. She solved my problem immediately. And I got to compliment her, thank her, and have a positive human interaction. Imagine that.

I recently tried to reach customer service at Autodesk, the maker of AutoCAD, about a licensing issue I was having. The company has deployed the most insufferable chatbot imaginable, which is totally useless at a practical level.

I finally got through to a person. It was such a relief, and so much more efficient. She was able to help me immediately and had my issue resolved within just a few minutes.

This tech stuff is total bullshit. People and human interaction will always be the answer.

Insurance companies have never needed AI to be a big pain in the a--

I wonder how much the use of AI in health insurance, and in similar situations, is driven by AI's infinite tolerance for causing customers pain. It avoids having actual human employees screw people out of coverage and eliminates the subsequent hiring and turnover problems. It's modern healthcare's version of the firing squad, or of using multiple executioners where only one unknowingly delivers the poison.

…keep saying build a "better more dependable AI". Consider whether AI is even a technology worthy of humanity! It is sucking all the oxygen out of investment in other technologies and innovation. Could we not be spending way too much time, money, and effort on something that, all the evidence shows, can't live up to the hype? We could be delaying other great innovations and technologies to "save" an unworthy and unsafe "AI."

The level of regulation and trust required to make AI "safe" looks increasingly unachievable in practice. Fascination has gotten in the way of rational, logical thinking, or even basic common sense.

"And there is a possible world in which we take a breath and ask how we can build a better, more reliable AI that can actually serve society, taking steps to make sure that it is used safely, equitably, and without causing harm." By what mechanism do you foresee this outcome as possible?

As we watch quadcopters murder people in Gaza, and are exposed to videos (both of which I will not watch) of Russian or Ukrainian infantry trying helplessly to run away from drones, do we not think that this is coming for us too? The darkness envelops.

In related news, various organizations, including the Arms Control Association and the Union of Concerned Scientists, strongly supported a Congressional effort to ban the use of AI in the command-and-control process for nuclear weapons. We advocate always having a human in the loop, and point to past near misses caused by equipment or software malfunctions or errors as justification. Unfortunately, the bill failed to muster sufficient support.

Could it be that Congress trusts a bot they do not comprehend more than the incoming President?

We were told AI could develop new cures; instead it is used to prevent people from getting cured.

I surmise that since OpenAI doesn't yet see on the horizon any civilian killer app for the current SOTA GenAI that could form a reliable source of profits, and since they are burning huge amounts of cash daily training and running these giant predictive statistical behemoths, they are now desperate to push ahead with whatever opportunities come their way.

The desperate search for a GenAI-based Killer App has failed, so now the only way for OpenAI to sustain itself and thrive is by developing a GenAI-based App to Kill.

OK, so LLMs are a toy; they are never going to become important. But what you continually evade is the necessity of emulating the human Unconscious Mind. This is going to be a lot of work. Most of the people working in AI are attempting to treat it like a program, or doing dumb things like using static devices in dynamic situations (ML for autonomous vehicles, anyone?). It requires a completely different approach; every time someone comes up with a shortcut (Prolog, Expert Systems, LLMs), it pushes the goal further away. That different approach is going to require completely different people, who are going to require at least 10 years of training. The easy methods (carryovers from programming) aren't going to work.

You were a professor of psychology; it should be right up your alley.

https://semanticstructure.blogspot.com/2024/12/llms-and-military.html

It's going to be more like a hundred years, because at the moment we have no clue at all how to do it. You can't train people on what isn't known.

The IDF is fighting the first AI-based inhuman war, with drone-based imaging used to identify and track ostensible combatants and to implement "optimized" aerial weapons targeting, with an allowable algorithmic weighting of ten estimated civilians per foot soldier and 100 civilians per opposing 'commander'. The resulting reported death tolls are based on body counts, and are therefore under-reported by the numbers of dead buried under rubble. The bird's-eye view from thousands of meters up has killed record numbers of medical personnel, humanitarian workers, journalists, women, and children; this from reporting by Deutsche Welle, which, if anything, is biased towards the occupiers.

And where does DW get that information? From Gaza, where anyone who doesn't report what Hamas wants gets shot. So it's not real information.

Yes, it's hard to see how we avoid dystopia at this point -- unless some massive AI-caused or AI-associated disaster happens soon and the world gets smart about a robust international treaty system with big teeth, akin to the nuclear weapons treaty system. And stat.

LLMs have plateaued, so the "bleak future" is on hold: the technology is unlikely to progress much further without substantial changes to the current technical approaches, and that will take time. Add 20-30 years to the dates on the "positive future" slide and we just might be in with a chance.

While I agree about the plateauing, I don't agree that the bleak future is on hold: LLMs, as weak as they are, are quite handy for creating very dystopian outcomes.

If you read Kafka, then you know that no LLM will ever match his bureaucrats' viciousness in denying somebody their rights over formalities or minutiae, but they are perfectly capable of doing it simply because they are ill-trained or instructed to.

If it's less than superintelligent, then humans can always outsmart it, and we can always switch it off. LLMs are less than superintelligent, and therefore not capable of successfully instigating the worst possible dystopian outcomes.

True, but people using them are capable of doing so, and in many cases they won't switch them off. The blurring of the definitions of algorithms and AI means that senior managers can use algorithms as a cut-out between themselves and the consequences of their decisions. 'AI-powered' = We take no responsibility for our terrible actions.

have you used Claude? Not at all plateaued.

I just asked Claude to multiply 3857389959 by 2358002388, and it very confidently gave me the wrong answer.

https://ibb.co/RB8qkxB
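
For what it's worth, this failure mode is arguably structural: an LLM predicts the digits of a product token by token rather than computing them, whereas any conventional runtime gets this exactly right. A minimal Python check of the multiplication above (ordinary big-integer arithmetic, no model involved):

```python
# Python integers are arbitrary-precision, so this product is exact,
# not a token-by-token guess at the digits.
a = 3857389959
b = 2358002388
print(a * b)  # 9095734734769222092
```

This is also why tool use, i.e. letting the model hand arithmetic off to a calculator or code interpreter, sidesteps the problem entirely.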

Brace yourself for disappointment! :-)
