56 Comments

I press 'like' but I do not like.

As a psychologist, you knew it would go there due to money. As humanists, we don't want to believe humans can be that pathetic.

BTW their dumping the motto happened a few years ago, not recently.

2025 is the new "1984" - "Ignorance is Strength"

The motto was dumped before; the news underscores that dumping and directly opens up applications they previously considered evil.

In an increasingly competitive world, money corrupts everything eventually.

Exactly. And money is how everything gets done to start with.

AI will not arise because some clever genius figures out the mind. It will be diligently assembled at great cost, with the expectation of making a profit.

The assumption that supporting the US military is necessarily evil always bugged me. What is your view of the Ukraine conflict? How about the looming invasion of Taiwan by the PRC? Or the PRC attacks in the Philippines and Vietnam? What about cyber defense against the PRC and the Russians? Could an AI system have detected the mapping error in the Kosovo conflict where the US bombed the Chinese embassy? There are many use cases for AI in DoD (and in any bureaucracy). The good or the evil must be considered case by case.

AI could also cause problems in military applications. The guaranteed fraction of hallucinations is problematic.

Aside from that, war is, unfortunately, how nations establish and maintain their rights to exist. But I don't argue with academics about military applications of technology.

That Waymo cars are doing well so far gives me hope that, with further improvements, current AI methods can be made smarter and more reliable.

Although I don't believe AI is relevant outside math, coding, and the physical sciences, it would be neat to see people prompt some AIs with, "How can we make the world a better place?"

Nice thought but more precision is called for. "Better" implies valuation, and values are subjective. I wouldn't expect something trained on reams of Internet text and then fine-tuned by a rarefied set of young college grads to behave in a way that aligns with my values.

I am genuinely puzzled by talk of “AI alignment” for precisely that reason.

Aligned with whose values? Sam Altman’s?

For a field that calls itself a “science,” there is a great deal of sloppy terminology.

It's one big circlejerk in search of VC funding. Science is hardly anywhere to be found.

If it were up to me, I’d call it “AI malignment”

We should just be thankful that AI companies don’t have our social security and income information

Oh, wait

Oy vey indeed. Now we can only steer our hopes towards the relief we might get from incompetence, crummy coding and slacking.

You know, things might not be so bad after all!

You forgot the big ones: intelligence gathering and propaganda/"shaping opinion" - the latter being bots.

Oh, and scamming. The Nigerian princes and the crypto people and everything else.

Google's "Don't be evil" was really just code for "Don't be Microsoft".

Like many a disruptive company that scales wildly, its disruption was more about taking over what the incumbent had than about replacing it with something better.

I don't really see the problem with using AI for military purposes - everything that can be used as a weapon eventually will be; AI is no exception. Might as well be first in the new arms race, and keep mutual assured destruction going - to keep the peace.

So we're simply killing time till the nukes fly?

We are trying to buy time until we can spread wide and far enough that nukes flying won't drive us to extinction. Given the nature of humans - aggressive, warlike, unpredictable apes - this is the best we can do.

Far and wide?

Going to Mars isn’t going to help much in that regard.

In fact, conflicts between humans on Mars (an extremely inhospitable place with scarce resources) are likely to be significantly more frequent than they are here on Earth. Anyone who thinks it is going to be one big happy family of humans is delusional.

Anyone who thinks people are going to be one big happy family anywhere is delusional. The solution is to have a lot of smaller families scattered far and wide enough that they can't all kill themselves at once. All over the Solar System at least, going interstellar at best.

There is an inherent contradiction in such a scheme.

It might work briefly for spreading humans out, to a very limited extent, on asteroids and Mars within the solar system (from Earth).

But it is a self-limiting process. Small groups (tribes) focused on survival won't have the resources, manpower, and motivation to keep spreading.

And spreading beyond the solar system requires developing (nuclear powered) starships that can be accelerated to a significant fraction of light speed (and then decelerated) relative to the destination planet.

So my guess is that the spreading will quickly die out along with the humans.

The whole idea is ill-conceived, in my opinion. If humans can't make it in paradise here on Earth, where we have everything we could possibly need, I seriously doubt we will make it anywhere else (unless one considers small pockets of humans hither and thither, just barely hanging on, to be “making it”).

If your argument were true, humanity would never have risen from small groups focused on survival to colonize the whole Earth and build the civilization we have now. Technology will give us ways to adapt to new environments - perhaps by building smaller "paradises" to fit our kind, perhaps by modifying humans themselves to fit their new homes instead. It's survival of the fittest, only with technology as the driving force instead of mutation and genetic drift.

For me, it's the whole idea of "make it here or nowhere at all" that is ill-conceived. It's the nature of life to fit into each ecological niche it can reach; to devour itself and dominate while it can, and then to die when it can't. Why should it be any different for humans? We'll reach the stars or die trying.

But hey, there are billion$ to be made on rockets to nowhere (aka Mars) so who am I to stand in the way of that?

The people who are most likely to survive as individuals or small groups (tribes?) on a planet like Mars are also likely to be the most aggressive in battling for scarce resources.

How was google's acquired data NOT ever going to be used for military and surveillance?

It's probably been used that way for years already.

I wrote a poem to celebrate the new year that likened 2024 to the biblical “glass darkly.”

It would appear that 2025 is wasting no time in showing us what we have come “face to face” with. And, boy! Is it ugly.

Don’t be evil. Be AI-vil

Top management aren't any less bumbling fools than the rest of us are.

"This is absolutely not what [Gary] wanted AI to become."

This is how the world always worked.

AI is software. One invests wisely and patiently, and waits.

There won't be quick results. But smart companies will do well. In a decade.

So sad Gary. I've often said that if Google, Microsoft and Meta go to the dark side we are so screwed. Never really trusted they would not... so sad. I'm still holding out for Microsoft to do the right thing.

Microsoft doing the right thing? Really? Have you heard of Microsoft being awarded billions by the military for HoloLens?

There's no difference between regular corporations and military contractors. They will pay you for services and you will deliver.

Which is how it should be.

Comment deleted (Feb 5)

Exactly. See the last line in my comment above.

Excellent article
