43 Comments

Please slow down. At the beginning of this article there are many spelling mistakes and missing words. It is best to proofread your work before posting. Just trying to help.


But we do know what's coming. AI will evolve into yet another existential-threat-scale technology. And by the time we understand that, it will be too late to turn back.

ANSWER: The marriage between violent men and an accelerating knowledge explosion is unsustainable.

This one sentence is really all we need to learn to know what's coming.

Nobody can predict the exact how, when, where and why of coming technology-fueled disasters. But if we zoom out from particular details to the larger picture, it's not that hard to see how giving violent men ever greater powers at an ever accelerating pace is going to turn out in the end.

Technically we are racing forward with impressive speed. But philosophically, in our relationship with all these emerging technologies, we are still stuck in the 19th century. We're clinging to a "more is better" relationship with knowledge that was entirely rational in the long era of knowledge scarcity, and cluelessly ignoring that we no longer live in that era.

Today we live not in the long old era of knowledge scarcity, but in a revolutionary and very different new era, characterized by knowledge exploding in every direction at an ever accelerating rate. We're refusing to adapt to the new environment we have created. And, as for any other species in any other time and place, the price tag for failing to adapt to changing conditions is death.

The AI "experts" everyone is worshipping today have good intentions, just as those working on the Manhattan Project had good intentions. But as the history of nukes should have taught us 60 years ago, good intentions are not enough. Just as was true in 1945, the well-intentioned AI "experts" are opening a Pandora's box that they won't know how to close once the price tag for AI becomes clear.

The marriage between violent men and an accelerating knowledge explosion is unsustainable.

Know that, and you'll know what's coming.

https://www.tannytalk.com/p/our-relationship-with-knowledge


I'll bet people as far back as the Renaissance felt just as you do about having emerged from the long darkness of ignorance and living in the glorious age of enlightenment. Should we have said "Enough!" in 1650?


They didn't have thousands of massive hydrogen bombs in 1650. Knowledge wasn't exploding in every direction at an ever accelerating pace in 1650.

And I'm not saying "Enough!" I'm saying it's time to learn some new things so as to adapt to a revolutionary new environment. Which the group consensus your comment represents refuses to do.

I'm not the Luddite. That would be the group consensus which is clinging to a 19th century relationship with knowledge.


You're taking a present-day perspective. I'm saying that, in their time, people probably felt much the same as you do now. Knowledge certainly was expanding, and at a pace which *at the time* must have seemed startling. At every point along an exponential curve the present looks steep and the past looks flat. One might call your position "present exceptionalism"


I think it is different now. If there ever was an exponential curve, it is itself now turning exponential. The problem is that historically we could keep up, but now we don't stand a chance. The AIs will start to feed on themselves in a positive feedback loop (or perhaps it's a negative one for us?). See this excellent third-party article for the more detailed reasoning: https://ourworldindata.org/technology-long-run


That's a great article, thanks for the link. Yes to what you said: knowledge development feeds back upon itself, leading to an ever accelerating pace of knowledge development. And you're right, there comes a point where we can't keep up, where we can't adapt fast enough to the changing environment. What that point is exactly is hard to say, but the fact that there is some limit to human ability is easy to claim.

Imho, if we're to have any hope of managing this runaway train, it's in matching revolutionary technological developments with revolutionary thinking. We can't keep radically transforming our cultural environment while still thinking about things the way we always have.

I'm attempting such radical thinking in the world peace section of my blog. The marriage between violent men and an accelerating knowledge explosion is unsustainable. We can keep either, but not both, so one of them has to go.

Currently, I see no evidence that we're ready for such radical thinking. But one benefit of technologies of large scale is that they contain the potential to dramatically change the status quo pretty quickly. As an example, imagine the war in Ukraine were to go nuclear. That would be far more persuasive than anything anybody can say, and all of a sudden a world without men might start sounding pretty reasonable.

One question I see is: will such educational real-world events arise in a small enough dose that we have the opportunity to learn from them? If yes, then there is still hope. If no, then, well, we could always start talking about near-death experiences.


I'm not sure I agree that a few despots with itchy fingers near big red buttons justify a policy of actively eliminating Y chromosomes. There are billions of them inside reasonable, peaceful humans. It might instead occur through a slower/artificial evolutionary process, through deliberate preference and/or evolutionary selection. But all you might then need is another war to change people's minds (if national sovereignty is still a concept). It's 99.7% men on the Ukrainian front lines, protecting their families and homes.


Your second sentence completely throws me. Could you try that thought again in different words?

Given that current AI has no agency and no volition, what change are you anticipating that will enable AI to "feed on [it]self"? Or do you just mean people will (continue to) use AI to develop more powerful AI?


Mathematically, I meant multiplying the exponents of any two variables involved in the exponentiation, i.e. raising an exponent by another exponent. Literally, I suppose I just meant 'things will get very messy' 😛.
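
To put that loosely in symbols (my own gloss on the commenter's wording, not a precise model): ordinary exponential growth has a fixed rate, while the scenario being gestured at is one where the rate itself grows exponentially, which gives a double exponential.

```latex
% Fixed-rate exponential growth versus growth whose rate itself grows
% (illustrative only; r, r_0 and k are assumed constants)
\[
x(t) = e^{r t}
\qquad\text{versus}\qquad
x(t) = e^{r(t)\, t}, \quad r(t) = r_0 e^{k t}.
\]
% The second, a double exponential, eventually outruns any fixed-rate exponential.
```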

When I wrote about 'AIs feeding on themselves', I meant that chatbots will increasingly ingest as training data their own or other chatbots' content, rather than human-generated content. This again will lead to an exponential rise in content and potentially large departures from acknowledged truth. Literally, the bots will increasingly believe in their own shit. We already know the consequences of deluded bubbles of echo-chambered thinking. But it will be much harder to convince (or turn off) a distributed bot about its erroneous assumptions.
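
A toy sketch of that feedback loop (my own illustration, not from the linked article, using a simple Gaussian stand-in for a "model"): repeatedly re-fitting a model to a finite sample of its own previous output tends to make the estimate drift and its spread shrink, a crude analogue of chatbots training on chatbot-generated text.

```python
import random
import statistics

# Toy "model trained on its own output" loop (illustrative sketch only).
# Generation 0 stands in for human-generated data; each later generation
# is fitted to a finite sample drawn from the previous generation's model.
random.seed(0)

mean, stdev = 0.0, 1.0   # generation 0: the "human" distribution
sample_size = 25         # finite training data per generation

for generation in range(1, 31):
    sample = [random.gauss(mean, stdev) for _ in range(sample_size)]
    mean = statistics.fmean(sample)    # next model imitates the sample mean...
    stdev = statistics.stdev(sample)   # ...and the sample spread
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mean:+.3f} stdev={stdev:.3f}")

# Typically the mean wanders away from its starting point and the spread
# slowly shrinks, i.e. the model increasingly "believes its own output".
```

With real LLMs the dynamics are far messier, but the qualitative worry is the same: distributions drift when a model's outputs are recycled as its inputs.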


Well yes, people have been predicting the end of the world since the time of Christ. What's different today is that now we have the tools to make it happen, and no longer need divine intervention to get the job done.

I may misunderstand you, but I sense you're trying to make the point that we've been here before and nothing bad happened. If that's what you mean, we could certainly enjoy a debate on such a claim.


Well, we've obviously never been *exactly* here before -- history does not, in fact, repeat. The strongest reasonable claim one could make is that we've been in an analogous situation (or, perhaps, that we're *always* in an analogous situation, namely a point along an exponential growth curve). As for things likely to lead to the downfall of man, I feel that the internet and its ability to weave together formerly-isolated crackpots and misanthropes into "communities" was, and is, a bigger threat than anything I see near-term from AI


Well, if you're going to attack crackpots, I'm going to take that personally. :-)

Seriously, if conventional thinking generally considered "normal", "reasonable" and "realistic" etc could solve a problem, that problem would likely already be solved. Thus, the most promising territory for exploration may be those ideas considered to be crackpot.

I'm not all that worried about current AI either. It's where it's likely to go from here that is more my concern.


Except that AI is jet fuel for crackpots. Specifically, a) the chatbots, as a source of misinformation, will further undermine trust, and b) the next Bing Sydney in Waluigi mode could easily spawn any number of QAnon-type cults.


Many of these critiques might have been applied to the printing press, typewriters or paper: they allow humans to create problematic information and do things that are *already* against the law. (If they weren't against the law, then it's questionable to complain that using AI makes them somehow more problematic.)

It's unclear what your silly examples add to any attempt at pragmatic discussion, other than being clickbait examples to try to get readers. They masquerade as adding something to the debate, but they are merely obvious potential examples of the class of issues that need to be addressed. They don't add anything to seriously considering the issue of if or how to address them, other than seemingly being an attempt to ramp up moral-panic porn.

re: the 1st example of someone asking about a paper that doesn't exist: yup, the software has glitches, and it seems possible to educate all but truly blithering idiots that they should check facts. That's a problem regardless of where they get information from. If anything this was a case where a fact was being checked, merely inefficiently and wasting a prof's time.

In the real world, information from any source can have glitches. If anything, perhaps a higher level of glitches will teach people to be careful to evaluate information from multiple sources.

2nd example: yes, software can be used to scam people, as it has been able to for decades. It's against the law already: but I guess you wish to make it doubly against the law, as if that'll help? Again, it's useful to teach people to be careful with their credit card information. We can't child-proof/idiot-proof the whole world.

3rd: yes: just as a word processor can be used to create BDSM, etc. written porn. Or a printing press.

4th: again, so instead of a stranger saying "X has been in an accident and isn't conscious to talk", this made it slightly easier to dupe someone. Yup, people can be scammed; this made it a bit easier. It's still against the law already.

Often in the real world it's difficult to judge the credibility of information: for instance, a professor who doesn't bother to learn about the academic work on the topics he comments on, such as public choice theory or regulatory capture. It's what leads other professors to have a hard time taking a simplistic, poorly reasoned argument from a poorly informed source seriously.

re: "Lately I have been asked to participate in a bunch of debates about whether LLMs will, on balance, be net positive or net negative. "

The same might be said of humans. Humans can create problematic content too, with or without tools. Puritans and religious zealots have been concerned about people being able to create pornography or print problematic ideas since the invention of writing, and then again when the printing press arose. Unfortunately, some authoritarians tend to be concerned that they can't control each and every action of humans to ensure they do nothing wrong. Others resist that temptation, but see an excuse to give in to their desire to control others when some new tech comes along.


"all but truly blithering idiots" -- just how many people are you imagining that excludes?

"perhaps a higher level of glitches will teach people to be careful to evaluate information from multiple sources" -- dream on! That's the second time your solution is to "teach people" something. Any solution that relies on large-scale teaching of the general public is doomed.

"its useful to teach people" -- that's three

``re: "Lately I have been asked to participate in a bunch of debates about whether LLMs will, on balance, be net positive or net negative. "'' -- these aren't "debates", they're (mostly baseless) speculation fests; there's no possible way people could conduct an informed (actual) debate on this topic today


I predict that, before we answer the "Are LLMs a net-positive?" question, we will have stopped calling them LLMs. The AIs we ask this question of will only use LLM technology as their language module.


There are historical parallels here with how nuclear energy and weapons were introduced into the world and then established an uneasy status quo. Again, it's people who press the buttons or, for LLMs, type the keys. But the latter - not 'ladder', as you wrote above! (although it could be?) - diverges when you consider how the content ChatGPT uses is solely derived from human thinking and typing (currently). So the unceasing creation of negatively focused information about LLM development (a.k.a. 'news', of which this post plays a part) will only drive us into more FUD. (That's fear, uncertainty and doubt, for any Rumsfeld fans requiring an explanation.) If we instead focused more on publicising positive aspects of the world and less on FUD, no matter its source and veracity, then the LLMs you're so scared of might just more easily disappear under society's bed. We could then live happier, more productive lives, with AI tools as adjuncts instead of fearsome overlords. Yes, there are going to be bad actors, just as there are still dictators with nuclear weapons, but the 1983 Cold War-era film 'WarGames', which combines nuclear Armageddon with a human-programmed AI, has an instructive ending that highlights my argument.


A key element which doesn't receive enough focus is the issue of scale. As the powers available to us grow in scale, the room for error shrinks.

With any technology there will typically be both benefits and risks.

For example, nuclear weapons have the benefit of sobering the great powers and helping to limit direct conflict between them. So we haven't yet seen a repeat of WWI and WWII, which is a very real benefit. But because of the vast scale of these weapons, a single miscalculation, or even a simple unintended mistake, has the potential to crash the modern world, thus erasing most of the benefits we derive from many other technologies. As the scale of a technology grows, the room for error shrinks.

Or take genetic engineering. This technology will bring many benefits in a wide range of industries, and will almost certainly deliver a number of medical miracles. But as we make genetic engineering ever easier, ever cheaper, ever more powerful, and ever more accessible to ever more people, we run a very real risk that somebody somewhere will, with intent or by mistake, create new life forms which seriously disrupt the natural environment we depend on. As the scale of a technology grows, the room for error shrinks.

The scale of future AI technology seems almost limitless. Certainly AI too will deliver many benefits. Nobody really knows how far this technology can advance, but it seems safe to predict this will be another technology of vast scale in some manner. And so the same formula applies. As the scale of a technology grows, the room for error shrinks.

If we insist on shrinking the room for error on as many fronts as possible, as fast as we possibly can, what is the most likely outcome of such a process?

Dear readers, be warned. Understanding the above has the potential to radically edit your relationship with experts and cultural leaders on many fronts.


Good points, if somewhat askew from the main thrust of my argument. In brief response:

1) Scale has several dimensions, only one of which is access to a particular technology.

2) The probability of an error isn't the only factor - the damage a particular error causes is of course another one.

3) There's also the timescale over which damage is caused and/or known to be apparent. Non-stick frying pans, CFCs, CRISPR, sulphate aerosols, etc, etc. (See: Chaos/complexity theory/law of unintended consequences...)

But scale does allow equivalent damage to be caused through millions of small errors (but, hey, evolution...), equivalent to a single individual making one decision. Whether it's an error or not (or they're deluded) is wholly dependent on an observer's point of view - politically, financially, socially (e.g. 1) from the Enola Gay's cockpit vs. standing at Hiroshima's Ground Zero, 2) a rare-earths child miner vs. a smartphone user...).

Good discussion, thank you.


Thank you too Johnathan, and apologies for my rhetorical wandering.

I like your point regarding "wholly dependent on an observer's point of view". That's so true. While we normally assume that a collapse of modern human civilization would be a historic tragedy, to most of the species on the planet it might be viewed as a much welcome blessing.

And then we have the example of the collapse of the Roman Empire, which did lead to a thousand years of darkness, but something quite remarkable did emerge from the ashes. Whether we have AI or not, this pattern is likely to repeat itself many times.

And then there is the question of whether life is really better than death, as we almost always assume without questioning. There is actually no proof to back up this assumption, so a nuclear war or some other techno-tragedy might involve the glorious event of a couple billion souls traveling up the tunnel into the light, so to speak.

Finally, I don't really get why I'm concerned about any of this given that I'm 71 and have a "get out of jail free card". Well, ok, ok, so I do get it. I just like typing way way too much. :-)


I'm not *so* far behind. Yes, you never know about that second coming business. After all, every green(red?)-eyed American capitalist is clutching a wad of printed cotton we've all decided has value, yet printed on each are the words 'In God We Trust'. Talk about backing both horses and rejoicing in imperfect information... :-)


I wouldn't use the Enola Gay's cockpit as reference; I'd use any of the thousands of Allied soldiers preparing to invade mainland Japan and be slaughtered in the process, or perhaps (though they didn't realize it at the time) any of the millions of Japanese civilians who would have been slaughtered in an all-out attack on the mainland using conventional weaponry


I'm aware of the ethical rationale behind the use of atomic bombs in WW2's Pacific Theatre. But what you say also demonstrates my point that it's a matter of perspective. For example, a WW2 European leader would have cared less about whether Japan also had an atomic bomb than the residents of Hawaii, or even perhaps California, would have. Mutual assurance only went global later. Even now, billionaire preppers prefer New Zealand.


I was just suggesting that those might be more sharply contrasting perspectives, that's all


The voice-call mimic is a big concern, because where has a voice been heard in the first place, in order to sound like a family member? On the other hand, I'm a bit over the number of people who fall for entering or handing over banking or identity details to scammers. Surely everyone is aware of just about every method by now. I really feel for the people who fall for these quite rudimentary scams; it's sad.


All the more reason not to dawdle on the quest for super-intelligence and give the bad actors more time to marshal their forces. Ultimately we have no choice but to trust that machines too will become enlightened. The electro-chemical signals of neurons propagate at less than 500 feet per second, whereas optical or electrical signals propagate at a sizable fraction of the speed of light -- more than a million times faster. Ultimately humans will seem to operate on geologic time-scales to machines.
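
A rough back-of-the-envelope check of that ratio (my arithmetic, using assumed round figures of about 150 m/s for neural conduction and about half the speed of light for signals in copper or fibre):

```latex
\[
\frac{0.5 \times 3 \times 10^{8}\ \text{m/s}}{1.5 \times 10^{2}\ \text{m/s}} \;=\; 10^{6}
\]
```

so "more than a million times faster" is about the right order of magnitude.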


The burden of proof/responsibility is on us to separate fact from “authoritative bulls**t”, but it gets harder/impossible, as in the case of voiceprint auth used in finance. When even MSFT mgmt uses “we need AI regulation” as a crutch when confronted with ChatGPT failures, caveat emptor. Congressional action always lags and comes in reaction to major fails IRL. HT Cory Doctorow for his blog: https://pluralistic.net/2023/03/09/autocomplete-worshippers/


People had dire predictions for Stable Diffusion, the now-famous open-source image generation model that I and others can run on our own hardware: that its ability to generate photorealistic images would be used for fake news and disinformation. And while there have been real negative consequences to SD, particularly in the deepfake porn category, the feared flood of fake news articles backed up by AI-generated photos simply has not happened. (As far as I know; feel free to share links if I'm missing something.)

My question, then, is: what makes language generation models a greater threat than image generation models? That's not to deny the other kinds of damage such AIs can do, and that our society needs to account for one way or another: the infamous emotional rollercoaster of Replika and other "AI waifus", enhanced scams, and people trusting the AI in situations where it is hallucinating. I'm just skeptical about the disinformation angle.


Well, text is sort of fundamentally different from images in that people assume a text string makes a relatively unambiguous statement about the world, whereas most people don't assume that about a picture. Consequently, there are innumerable human and automated systems that consume text and do things with it, but relatively few such things for images


Agree with the anti-AI AI—it’s the analog to how oral polio vaccines work.

As for voice calls, it seems like it would take only a question or two to establish that the voice isn’t who you think it is. Might feel awkward to ask under stress, but so is losing a lot of money.

Turns out a guy named Mark Rober and his merry band have taken some of the fight to the enemy:

https://www.youtube.com/watch?v=xsLJZyih3Ac

It’s worth watching. As are all of his videos.


We've got to use AI to fight against all these scammers


But then you're right in the same boat with the anti-spammers, playing an endless chess match against a huge pool of opponents


These are just entry-level tricks, but they still work well enough to deceive. Beyond these, black hats have been using LLMs to generate malicious code for attacks since ChatGPT came out.
