Discussion about this post

Phil Tanny

But we do know what's coming. AI will evolve into yet another existential-threat-scale technology. And by the time we understand that, it will be too late to turn back.

ANSWER: The marriage between violent men and an accelerating knowledge explosion is unsustainable.

This one sentence is really all we need to learn in order to know what's coming.

Nobody can predict the exact how, when, where, and why of coming technology-fueled disasters. But if we zoom out from particular details to the larger picture, it's not that hard to see how giving violent men ever greater powers at an ever accelerating pace is going to turn out in the end.

Technologically, we are racing forward with impressive speed. But philosophically, in our relationship with all these emerging technologies, we are still stuck in the 19th century. We're clinging to a "more is better" relationship with knowledge that was entirely rational in the long era of knowledge scarcity, and cluelessly ignoring that we no longer live in that era.

Today we live not in the long old era of knowledge scarcity, but in a revolutionary, very different era characterized by knowledge exploding in every direction at an ever accelerating rate. We're refusing to adapt to the new environment we have created. And as with any other species in any other time and place, the price tag for failing to adapt to changing conditions is death.

The AI "experts" everyone is worshipping today have good intentions, just as those working on the Manhattan Project had good intentions. But as the history of nukes should have taught us 60 years ago, good intentions are not enough. Just as was true in 1945, the well intentioned AI "experts" are opening a pandora's box that they won't know how to close once the price tag for AI becomes clear.

The marriage between violent men and an accelerating knowledge explosion is unsustainable.

Know that, and you'll know what's coming.

https://www.tannytalk.com/p/our-relationship-with-knowledge

W. James

Many of these critiques might have been applied to the printing press, typewriters, or paper: they allow humans to create problematic information and do things that are *already* against the law. (If they weren't against the law, then it's questionable to complain that using AI makes them somehow more problematic.)

It's unclear what your silly examples add to any attempt at pragmatic discussion, other than being clickbait examples to try to get readers. They masquerade as if they were adding something to the debate, but they are merely obvious potential examples of the class of issues that need to be addressed. They don't add anything to seriously considering whether or how to address them, other than seemingly being an attempt to ramp up moral panic porn.

Re: the 1st example of someone asking about a paper that doesn't exist: yup, the software has glitches, and it seems possible to educate all but truly blithering idiots that they should check facts. That's a problem regardless of where they get information from. If anything, this was a case where a fact was being checked, merely inefficiently, wasting a prof's time.

In the real world, information from any source can have glitches. If anything, perhaps a higher level of glitches will teach people to be careful to evaluate information from multiple sources.

2nd example: yes, software can be used to scam people, as it has been able to for decades. It's against the law already: but I guess you wish to make it doubly against the law, as if that'll help? Again, it's useful to teach people to be careful with their credit card information. We can't child-proof/idiot-proof the whole world.

3rd: yes, just as a word processor can be used to create BDSM or other written porn. Or a printing press.

4th: again, so instead of a stranger saying "X has been in an accident and isn't conscious to talk," this made it slightly easier to dupe someone. Yup, people can be scammed, and this made it a bit easier. It's still against the law already.

Often in the real world it's difficult to judge the credibility of information: for instance, a professor who doesn't bother to learn about the academic work on the topics he comments on, such as public choice theory or regulatory capture. It's what leads other professors to have a hard time taking a simplistic, poorly reasoned argument from a poorly informed source seriously.

re: "Lately I have been asked to participate in a bunch of debates about whether LLMs will, on balance, be net positive or net negative. "

The same might be said of humans. Humans can create problematic content too, with or without tools. Puritans and religious zealots have been concerned about people being able to create pornography or print problematic ideas since the creation of writing, and again when the printing press arose. Unfortunately, some authoritarians tend to be concerned that they can't control each and every action of humans to ensure they do nothing wrong. Others resist that temptation, but see an excuse to give in to their desire to control others when some new tech comes along.

