43 Comments

Please slow down. At the beginning of this article there are many spelling mistakes and missing words. It is best to proofread your work before posting. Just trying to help.


But we do know what's coming. AI will evolve into yet another existential-threat-scale technology. And by the time we understand that, it will be too late to turn back.

ANSWER: The marriage between violent men and an accelerating knowledge explosion is unsustainable.

This one sentence is really all we need to learn to know what's coming.

Nobody can predict the exact how, when, where and why of coming technology-fueled disasters. But if we zoom out from particular details to the larger picture, it's not that hard to see how giving violent men ever-greater powers at an ever-accelerating pace is going to turn out in the end.

Technically we are racing forward with impressive speed. But philosophically, in our relationship with all these emerging technologies, we are still stuck in the 19th century. We're clinging to a "more is better" relationship with knowledge that was entirely rational in the long era of knowledge scarcity, and cluelessly ignoring that we no longer live in that era.

Today we live not in the long old era of knowledge scarcity, but in a revolutionary, very different new era characterized by knowledge exploding in every direction at an ever-accelerating rate. We're refusing to adapt to the new environment we have created. And like any other species in any other time and place, the price tag for failing to adapt to changing conditions is death.

The AI "experts" everyone is worshipping today have good intentions, just as those working on the Manhattan Project had good intentions. But as the history of nukes should have taught us 60 years ago, good intentions are not enough. Just as was true in 1945, the well intentioned AI "experts" are opening a pandora's box that they won't know how to close once the price tag for AI becomes clear.

The marriage between violent men and an accelerating knowledge explosion is unsustainable.

Know that, and you'll know what's coming.

https://www.tannytalk.com/p/our-relationship-with-knowledge


Many of these critiques might have been applied to the printing press, typewriters or paper: they allow humans to create problematic information and do things that are *already* against the law. (If they weren't against the law, then it's questionable to complain that using AI somehow makes them more problematic.)

It's unclear what your silly examples add to any attempt at pragmatic discussion other than being clickbait to draw readers. They masquerade as adding something to the debate, but they are merely obvious potential examples of the class of issues that needs to be addressed. They don't add anything to serious consideration of whether or how to address them, other than seemingly being an attempt to ramp up moral-panic porn.

re: the 1st example of someone asking about a paper that doesn't exist: yup, the software has glitches, and it seems possible to educate all but truly blithering idiots that they should check facts. That's a problem regardless of where they get information from. If anything this was a case where a fact was being checked, merely inefficiently, wasting a prof's time.

In the real world, information from any source can have glitches. If anything, perhaps a higher level of glitches will teach people to be careful to evaluate information from multiple sources.

2nd example: yes, software can be used to scam people, as it has been for decades. It's against the law already: but I guess you wish to make it doubly against the law, as if that'll help? Again, it's useful to teach people to be careful with their credit card information. We can't child-proof/idiot-proof the whole world.

3rd: yes, just as a word processor can be used to create BDSM or other written porn. Or a printing press.

4th: again, so instead of a stranger saying "X has been in an accident and isn't conscious to talk", this made it slightly easier to dupe someone. Yup, people can be scammed; this made it a bit easier. It's still against the law already.

Often in the real world it's difficult to judge the credibility of information, for instance from a professor who doesn't bother to learn about the academic work on the topics he comments on, such as public choice theory or regulatory capture. That's what leads other professors to have a hard time taking a simplistic, poorly reasoned argument from a poorly informed source seriously.

re: "Lately I have been asked to participate in a bunch of debates about whether LLMs will, on balance, be net positive or net negative. "

The same might be said of humans. Humans can create problematic content also, with or without tools. Puritans and religious zealots have been concerned about people being able to create pornography or print problematic ideas since the creation of writing, and then again when the printing press arose. Unfortunately some authoritarians tend to be concerned that they can't control each and every action of humans to ensure they do nothing wrong. Others resist that temptation, but see an excuse to give in to their desire to control others when some new tech comes along.


I predict that, before we answer the "Are LLMs a net-positive?" question, we will have stopped calling them LLMs. The AIs we ask this question of will only use LLM technology as their language module.


There are historical parallels here with how nuclear energy and weapons were introduced into the world and then established an uneasy status quo. Again, it's people who press the buttons or, for LLMs, type the keys. But the latter - not 'ladder', as you wrote above! (although it could be?) - diverges when you think of how the content which ChatGPT uses is solely derived from human thinking and typing (currently). So the unceasing creation of negatively focused information about LLM development (a.k.a. 'news', of which this post plays a part) will only drive us into more FUD. (That's fear, uncertainty and doubt for any Rumsfeld fans requiring an explanation.) If we instead focused more on publicising positive aspects of the world and less on FUD - no matter its source and veracity - then the LLMs you're so scared of might just more easily disappear under society's bed. We could then live happier, more productive lives, with AI tools as adjuncts instead of fearsome overlords. Yes, there are going to be bad actors, just as there are still dictators with nuclear weapons, but the 1983 Cold War era film 'WarGames', which combines nuclear Armageddon with a human-programmed AI, has an instructive ending which highlights my argument.


The voice-call mimic is a big concern, because where was the voice heard in the first place in order to sound like a family member? On the other hand, I'm a bit over the number of people who fall for entering or handing over banking or identity details to scammers. Surely everyone is aware of just about every method by now. I really feel for the people who fall for these quite rudimentary scams; it's sad.


All the more reason not to dawdle on the quest for super-intelligence and give the bad actors more time to marshal their forces. Ultimately we have no choice but to trust that machines too will become enlightened. The electrochemical signals of neurons propagate at less than 500 feet per second, whereas optical or electrical signals propagate at a sizable fraction of the speed of light -- roughly a million times faster. Ultimately humans will seem to operate on geologic time-scales to machines.
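
For a rough sanity check of that speed comparison, here is a back-of-the-envelope calculation; the specific figures (about 150 m/s for fast myelinated neurons, about two-thirds of the speed of light for signals in wire or fiber) are illustrative assumptions, not measurements from the comment.

```python
# Rough comparison of signal propagation speeds (illustrative figures only).

FEET_PER_METER = 3.281

neuron_speed_mps = 150.0    # fast myelinated axons: roughly 150 m/s (under 500 ft/s)
machine_speed_mps = 2.0e8   # electrical/optical signals: roughly 2/3 the speed of light

print(f"Neuron signal:  ~{neuron_speed_mps * FEET_PER_METER:.0f} ft/s")
print(f"Machine signal: ~{machine_speed_mps:.1e} m/s")
print(f"Speed ratio:    ~{machine_speed_mps / neuron_speed_mps:,.0f}x")  # on the order of a million
```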


The burden of proof/responsibility is on us to separate fact from “authoritative bulls**t” but it gets harder/impossible as in the case of voiceprint auth used in finance. When even MSFT mgmt uses “we need AI regulation” as a crutch when confronted by ChatGPT failures, caveat emptor. Congressional action always lags and is in reaction to major fails IRL. HT Cory Doctorow for his blog: https://pluralistic.net/2023/03/09/autocomplete-worshippers/


People had dire predictions for Stable Diffusion, the now-famous open-source image generation model that I and others can run on our own hardware: that its ability to generate photorealistic images would be used for fake news and disinformation. And while there were real negative consequences to SD, particularly in the deepfake porn category, the feared flood of fake news articles backed up with AI-generated photos simply has not happened. (As far as I know; feel free to share links if I'm missing something.)

My question, then, is what makes language generation models a greater threat than image generation models? That's not to deny the other kinds of damage such AIs can do, and that our society needs to account for one way or another: the infamous emotional rollercoaster of Replika and other "AI waifus", enhanced scams, and people trusting the AI in situations where it is hallucinating. I'm just skeptical about the disinformation angle.


Agree with the anti-AI AI—it’s the analog to how oral polio vaccines work.

As for voice calls, it seems like it would take only a question or two to establish that the voice isn’t who you think it is. Might feel awkward to ask under stress, but so is losing a lot of money.

Turns out a guy named Mark Rober and his merry band have taken some of the fight to the enemy:

https://www.youtube.com/watch?v=xsLJZyih3Ac

It’s worth watching. As are all of his videos.


We've got to use AI to fight back against all these scammers.


These are just entry-level tricks, but they still work well enough to deceive. Beyond these, black hats have been using LLMs to generate malicious code for attacks since ChatGPT came out.
