
LLMs are actually *Large Word Sequence Models*. They excel at reproducing sentences that sound correct, mostly because they have been trained on billions of small groups of word sequences.

However, language exists to transfer meaning between humans. Calling the chatbot an LLM implies it conveys meaning. Any meaning or correctness behind these generated word sequences is purely incidental, and whatever meaning there is gets inferred solely by the reader.

That said, the chatbot is ground-breaking technology; it will help non-native English speakers with syntax and grammar. But it will help no one with conveying meaning.

When the next generation looks back in 15 years and sees the trillions of dollars poured into LLMs and non-symbolic algorithms, they will be stunned at how short-sighted and misguided we currently are.


So here's the question, then: what allows these models to (sometimes) do things that (appear to) go beyond mere "word sequencing"? Obviously LLMs have an advantage, in that humans will read meaning into the text that isn't actually there, and auto-correct whatever part of the argument doesn't add up; I often find that on re-reading an AI-generated essay, I initially made its contents more plausible because I wasn't reading carefully. But still, is expanding a list of bullet points into a (mostly) coherent essay really just "word sequencing"? What about transforming a three-paragraph essay into a (mediocre) rap, as a friend of mine did while toying with ChatGPT? Maybe clever statistics really can get us quite close to something that looks like intelligence, when presented with enough data. It's just hard to know exactly what's going on.


Well said, again. The level of BS we will have to endure because these 'word order prediction systems' can produce 'correct nonsense' is really mind-boggling, and not many are aware of the scale of the problem. So it is good that it is pointed out.

With respect to what we should do about it: I would humbly suggest listening to the last 7 minutes of my 2021 talk: https://www.youtube.com/watch?v=9_Rk-DZCVKE&t=1829s (links to the last 7 minutes). It discusses the fundamental vulnerability of human intelligence/convictions and the protection of truthfulness as a key challenge of the IT revolution.

Also in that segment: one thing we might do at a minimum is establish a sort of 'Hippocratic Oath for IT', and criminalise systems pretending to be human.

There is more, but those were first thoughts (though even before 2000 I argued that internet anonymity when 'publishing' will probably not survive, because it enables too much damage to society).

Final quote from that 7-minute segment at the end of the talk:

"It is particularly ironic is [sic] that a technology — IT — that is based on working with the values 'true' and 'false' (logic) has consequences that undermine proper working of the concepts of 'true' and 'false' in the real world."


The really depressing thing is that there's probably zero chance of either government regulation or industry self-regulation. A lot of the most dangerous AI research presents itself as "science," and thus any regulation will be taken as "anti-science." The fact that this so-called science is for-profit and frequently opaque will continue to be conveniently ignored. More importantly, a lot of astronomically rich people have poured enormous amounts of money into toxic AI, and they will use their money to defend their investments. Google, Facebook, etc. aren't just going to shrug their shoulders and say "actually, you're right, this stuff we've poured gajillions of dollars into is actually dangerous to civilization, we'll stop."


"It is particularly ironic is [sic] that a technology — IT — that is based on working with the values 'true' and 'false' (logic) has consequences that undermine proper working of the concepts of 'true' and 'false' in the real world." It would be better to call them 0 and 1, they are a stripped down version of logic, and if you add in the directed nature of IT operations, it has very little to do with logic in the real world - temporal logic, existential logic, propositional logic, modus tollens. However, if you introduce these concepts into IT, together with the relations they control, then the concepts of true and false in the real world won't be undermined, but strengthened, because we are not capable of thinking about logic in the large.


It was a bit tongue-in-cheek, of course. The words 'true' and 'false' in the quote were in quotes for a reason :-)

The underlying technology executes classical logic on true/false values (all the standard truth tables, Boolean algebra; it doesn't matter whether they are labeled 0/1 or F/T, it is the same logic/algebra). Digital computers as machines cannot do _anything_ but these core truth tables.

I agree that it has little to do with 'logics' in the real world (which in most cases are more quantum-logic-like, I think). And digital computers cannot handle that, only small logical questions, and very inefficiently; it doesn't really scale well. So "if you introduce these concepts into IT, together with the relations they control, then the concepts of true and false in the real world won't be undermined, but strengthened" is a modus ponens where we must wonder whether the antecedent can be true in practice and at scale. Attempts to do this were the mainstay of symbolic AI (though in that era, the limitations of the classical Boolean logic you rightly point out were often simply ignored).
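To make that concrete, here is a toy Python sketch (my own illustration, nothing from the talk) that enumerates the core truth tables in question; whatever labels you pick, this small algebra is all the hardware ultimately executes:

```python
from itertools import product

# The handful of Boolean operations that digital hardware ultimately reduces to.
ops = {
    "AND":  lambda a, b: a and b,
    "OR":   lambda a, b: a or b,
    "XOR":  lambda a, b: a != b,
    "NAND": lambda a, b: not (a and b),
}

for name, op in ops.items():
    print(f"-- {name} --")
    for a, b in product([False, True], repeat=2):
        # Relabeling False/True as 0/1 or F/T changes nothing about the algebra.
        print(f"{a!r:5} {b!r:5} -> {op(a, b)!r}")
```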


https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog was cute in 1993. We naively thought that anonymity was a good thing, but it has become a weapon for nation states, trolls, and mobs. The Federalist Papers were published anonymously, but now publishing is free and dissemination is easy. The release of The Twitter Files exposes the very real dangers of Government interference in (online) speech.


"The release of The Twitter Files exposes the very real dangers of Government interference in (online) speech."

Complete drivel. All the so-called "Twitter Files" showed was employees of a private company deciding whether to allow unverified claims and intimate images to appear on their social media site.


The Twitter Files are part of Musk's right-wing gaslighting operation. Did you actually read Taibbi's thread? There was zero "government interference" in "free speech" (which Musk is now grossly interfering with, with his suspensions of journalists and others and his banning of references to other platforms).


Harry G. Frankfurt famously defined "bullshit" as speech intended to persuade without regard to the truth. This is different from lies, which are deliberate departures from truth and thus require the liar to refer to truth in some way as part of their action. The liar knows the truth and conceals it. The bullshitter doesn't make any attempt to ascertain what is true or false.

By this definition, any and all products of large language models are bullshit. Regardless of their use or the intentions of their users.

I'm wondering whether Frankfurt's definition of the B.S. problem might be a better way to frame the trouble with LLMs -- rather than truth vs. lies. The latter opens you to replies along the lines of "but it's often right!" or "ethical people will use it ethically!"


Exactly (& see my 2020 Tech Review piece "GPT-3, Bloviator", which I wanted to call "GPT-3, Bullshit Artist")


Great piece Gary, thanks for sharing. The challenge of finding ways to integrate large language models with "reasoning" is a significant barrier that requires entirely new concepts and approaches. This challenge is made much more complex since there is little understanding of, or agreement on, what reasoning is. And there is little meaningful, scalable success in defining how computers and software could reason, or at least safely emulate reasoning. But the need for good reasoning, by both humans and computer systems, and then for ways they could collaborate using reasoning, is essential, and the urgency for this has just been profoundly escalated.


It seems unlikely human beings are qualified to teach reasoning to computers.

As evidence, consider our relationship with nuclear weapons. A single human being can destroy modern civilization in minutes, an ever-present existential threat we typically find too boring to discuss, even in presidential campaigns when we are selecting one individual to have sole authority over the use of these weapons. Evidence of reasoning ability?

When considering the future of AI we can learn a lot from our 75 years of experience with nuclear weapons. That well documented history should be teaching us that we simply aren't ready for more existential scale technologies at this time.


OMG Gary, I love your 4-point proposal! Indeed, good old search (e.g. for 'kitten' or 'Picasso') is in danger of getting polluted if the search engine indexes newer generative AI creations. Any 'news' item [e.g. about the discovery of a new species of butterfly, complete with pretty pics] is (should be) automatically sus - even if it is "actually" true! That's a sad world to live in.

Every AI generator to date, including the ones that create pretty artwork and output chat, is computing and outputting symbols, that's all it does. Those symbols do not, CANNOT be grounded in reality - because the generators have no way to relate their input symbols to anything they 'know', which in turn is because they have no first-hand experience.

We have ELIZA++ that outputs grammatically correct text in much longer sequences than Weizenbaum's original program, but that's about it.

****

Me: How can someone without a body interact with you?

ChatGPT: As a machine learning model, I exist solely as a collection of algorithms and data, and do not have a physical form or presence. Because of this, I cannot be directly interacted with in the same way that a physical object or a human with a physical body could be. Instead, I am designed to process and analyze text-based inputs in order to generate responses and provide information on a wide range of topics. Someone without a body could interact with me by providing text-based input, such as a written or spoken question, and I would use the algorithms and data that I have been trained on to generate a response. This response could then be read or heard by the person without a body, allowing for a form of interaction even without a physical body.

****

Like you point out, this is dangerous, more than it is amusing.


ChatGPT is like a theist--they too imagine an entity thinking, seeing, creating, etc. without brain or other physical components.


No clue what you mean, but oh well. ChatGPT computes dot products and sigmoids, etc., to output words and pixels one by one. Brains don't. Brains are not computers that use the body for I/O. Your redefining things to suit your purpose doesn't get you anywhere.
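For what it's worth, here is a minimal numpy sketch of the kind of arithmetic being described: one dot product per vocabulary word, then a softmax (a close cousin of the sigmoid) to turn the scores into a next-word distribution. The vocabulary and sizes are toy values of my own choosing, not anything from a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat"]  # toy vocabulary, purely illustrative
d_model = 8                                  # tiny hidden size for the sketch

hidden_state = rng.normal(size=d_model)              # what the network computed so far
output_embeddings = rng.normal(size=(len(vocab), d_model))

logits = output_embeddings @ hidden_state             # one dot product per word
probs = np.exp(logits) / np.exp(logits).sum()         # softmax -> probability per word

next_word = vocab[int(np.argmax(probs))]
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```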


Yes, your complete lack of comprehension is evident. And I did not redefine anything--you seem not to understand what "like" OR "define" mean.

P.S. The response is pure projection and hypocrisy. My English is perfect; this guy's comprehension is, again, absent. And his comment is a grossly intellectually dishonest personal attack of no value (I'm an officer of Mensa with an IQ of 150--his claim that I lack intelligence is a childish lashing out, a hollow insult that he knows is not true, the same with claiming that I haven't left substantive comments--dishonest nonsense when he himself has engaged with and even Liked some of those comments), just like his previous ad hominem lies like accusing me of redefining things to suit my purpose. Our substantive disagreement is about Strong AI, but he doesn't even understand the arguments for it, due to an apparent lack of abstract thinking.


I thought you didn't want to continue to engage?

"complete lack of comprehension' - there's a joke. If you wrote proper English, people would actually understand - what a concept!

Also, making personal attacks makes you look even more pathetic and illiterate. If you have anything useful to say about Gary's posts and their comments, that's a different matter - but that would require intelligence - in its absence, feel free to keep up your drive-by troll rants in broken English...


We simply need to evolve the internet culture to only trust IPFS content that is cryptographically signed, where the signer's public key is linked to a DID in a decentralized reputation graph. All other content will have to be assumed autogenerated by bots.
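The signing half of that is already cheap to do today. Here is a minimal sketch using the Python `cryptography` package; the IPFS publishing and the DID / reputation-graph lookup are assumed rather than shown. Note that the author signs with a private key, and readers verify against the public key the author's DID exposes:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Author side: sign the content before publishing it (e.g. to IPFS).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()   # what a DID document would expose
content = b"An article I actually wrote."
signature = private_key.sign(content)

# Reader side: trust the content only if the signature verifies against a
# public key found in the decentralized reputation graph.
try:
    public_key.verify(signature, content)
    print("Signature valid: attribute the content to this key's DID.")
except InvalidSignature:
    print("Invalid or missing signature: assume it was autogenerated by a bot.")
```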


Thanks Gary for making helpful points that these Generative-Pretrained-Transformer AI systems, like ChatGPT, are simultaneously very fun to use and yet (1) make many mistakes, so user beware, (2) can be weaponized by bad actors, and (3) are inexpensive to use by bad actors and other users alike. My further opinion here https://service-science.info/archives/6309


While it is completely true that ChatGPT-generated text looks more human-like and plausible, my question is: "What prevents someone from intentionally or unintentionally promoting misinformation on the web?" Isn't this already a problem of the internet? Even before ChatGPT, I have read a whole lot of articles, and watched videos, that I found were not accurate. Internet content can only be trusted so much. The trust factor has just decreased to an even lower level.

The problem is not generated content vs. written content. The problem is the gullibility of people when they read the content. It is like believing ads on TV because some movie star promotes the product, or the way the crypto market moved from highs to lows based on the actions and tweets of famous personalities. People learnt not to trust anything and everything, and things started stabilising.

My take: this is just the initial phase of ChatGPT. I am sure people will start disregarding it and will develop the required fences around what they read and trust. Who knows, the addiction to social media and the internet may start subsiding, which may just be a blessing in disguise for human civilisation as a whole.


People are already trying jailbreaks, and can simply use davinci-003 or similar instead to get around the guardrails.


This thing is dangerous. I propose a moratorium, except for carefully controlled research, until strong regulation is in place.


Apologies, but governance mechanisms are a fantasy being sold to us by those who are poised to get rich (and richer) from the development of such technologies. As evidence, consider this map which shows the countries in the world living under dictatorships.

https://worldpopulationreview.com/country-rankings/dictatorship-countries

These countries are run by very serious ruthless people who have no interest in our values. No governance scheme cooked up at academic conferences at elite institutions in the West will have any impact upon how these governments use AI.

Globally, the concept of strong regulation is a myth. This is useful information to have, because knowing it helps us better evaluate any experts we may read. If we come across some expert promising that some governance scheme will make AI safe, we'll know immediately that they aren't actually an expert.


To be perfectly frank, I am not sure whether it is even possible to regulate LLMs like ChatGPT. I am, however, convinced that they are dangerous in various ways. A real can of worms. If I could, I would BAN them, except that the bad guys wouldn't pay any attention to a ban.

I live in the EU, where we are finalizing the AI Act, which does provide some regulation for "objectives oriented" AI within the EU. The US is working along the same lines. Not perfect, but better than nothing.

As it stands, however, the current version of the AI Act just kicks the LLM can down the road, telling the Commission to come up with something. How, I have no idea. Any thoughts?


I think we should let it run free. I’d love for every person on earth to have access to a full and uncensored version of ChatGPT that could scour the net and other types of data and conduct robust research.

Humans can figure out misinformation vs what’s real. We all sort of know what is fake. Don’t underestimate humans.

This could be a very powerful tool for positive change if we allow everyone equal access to it.

Just like the Internet itself, of course some will abuse this power, but the overwhelming majority of people are “good” people that don’t want to harm others. We will mostly use this for good and for innovation and to build wealth.

That is my belief.


While it pains me to say this, particularly to you, I feel I have to ask if you followed the 2016 presidential election here in the United States.

Yes, most people are good, agreed. But as the scale of emerging powers grows, that matters less and less. As an example, a single person, Vladimir Putin, could destroy our civilization in just minutes if he chose to. It wouldn't matter that most people are good.


Snopes is an example of humans figuring it out. All 3rd party sites are humans. Great journalists are humans etc.

Humans still trump bots imo.

There are always people willing to dig deep and fact check it seems. So I have a lot of faith in us.


The question is whether humans can keep up with the deluge of misinformation to come


and the time, skills, and motivation to "figure out misinformation" in the deluge. But identifying misinformation is only the first step; we must then find and identify 'good', or perhaps more accurately more 'trustworthy', information, and then make sense of what we find and how it applies to our lives and activities. Technology is essential in meeting this challenge, but it won't come from LLMs.


And the answer to your question is no. At least in the United States, this is proven by the wide popularity of Donald Trump, a rampaging human misinformation machine. Technologists in AI and genetic engineering are either blind to the reality of the human condition, or they just don't care. It's bad engineering to ignore all the factors involved in one's project, just because they are inconvenient.


Also very true.


And by all 3rd party sites, I mean ultimately, there are humans behind them somewhere.

I guess when bots start breeding humans, or making their own more intelligent bots without human intervention, then we will have a bigger problem. Maybe... not 100% sure.

I wrote about this on my own Substack at one point.

https://open.substack.com/pub/charlottedune/p/the-botapocalypse?r=8gb2e&utm_medium=ios&utm_campaign=post


It's worth remembering what happened to machine translation in the mid-1960s. MT, as it was known, was going great guns in the 1950s. Alas, many researchers were also making promises they were unable to fulfill: high-quality MT is just around the corner, things like that. By the early 1960s the federal government, which had been footing the bill, was wondering when the promised results were going to materialize. They wanted those Russian technical documents translated into English, now!

So in 1964 they appointed a blue-ribbon panel to investigate, the Automatic Language Processing Advisory Committee, known as ALPAC. (FWIW, my teacher, David Hays, was on that committee.) The committee, in effect, came up with two recommendations: 1) There is no near-term prospect of high-quality machine translation so scratch that. 2) But we now have theoretical concepts we didn't have when we'd first started, so now's the time to fund basic research.

If you don't already know, you can guess what happened. The government took the first recommendation but ignored the second. Funding dried up. Though MT was at the time a separate enterprise from AI, and still is more or less, you can think of that as the first AI Winter. (BTW, that's how the field came to be known as "computational linguistics." It was rebranded.)

Machine learning these days seems mostly funded by private enterprise. That's certainly true for the really large projects, the ones of "foundational" scope. Still, if there's a nasty public backlash for the reasons you suggest, Gary, I can imagine the results will be much the same as they were in 1964. Note that back in the 1960s very few people had even seen a computer, much less held one in their hand as an everyday tool. MT was unknown to the public beyond a quickly forgotten news story.

I find it hard to imagine that those companies would be willing to fund the kind of research you favor, as do I, in the face of withering disdain from a populace that is outraged at what they've done. They're not funding it now; why should they do any different in the face of an angry public, a public that couldn't care less about the (esoteric) difference between pure DL and hybrid systems? Nor would the federal government be in much of a mood to provide funding.


Is there an opportunity here? What if a company established a "Validation Service"?

An author would get a "validation link" for something they wrote. If the reader found the information valuable enough but wanted to confirm the source and check the facts, they would click on the validation link. There would be a charge, depending on the size of the article.

If enough people paid for validation - perhaps the charge would be zero. And then the article would get a validation "seal".

Perhaps the firm that the author wrote for would pay for a validation seal up front.

Articles without a validation link or seal could be ignored.
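As a back-of-the-envelope sketch of the reader side (the service URL and response fields here are hypothetical, invented for illustration): hash the article, ask the validation service whether that hash carries a paid-for seal, and treat everything else as unvalidated.

```python
import hashlib
import json
import urllib.request

VALIDATION_SERVICE = "https://validator.example/api/seals/"  # hypothetical endpoint

def seal_status(article_text: str) -> str:
    """Look up whether this exact text has been validated (illustrative only)."""
    digest = hashlib.sha256(article_text.encode("utf-8")).hexdigest()
    try:
        with urllib.request.urlopen(VALIDATION_SERVICE + digest) as resp:
            record = json.load(resp)
        return "sealed" if record.get("validated") else "validation pending"
    except OSError:
        return "no validation record - ignore the article"
```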


That will probably help, but it won't be foolproof. Think about all the validation that auction firms do when they sell paintings by famous artists. Sometimes it turns out that what they thought was a real da Vinci was a forgery, and so on. In the future, we will not be able to tell if photos and films of historical figures are real.

There will be thousands of different versions of the "I Have a Dream" speech online, and no one will be able to tell which one was real. Historians might be able to, at least for some of the most famous events. But even with present technology, we can create deep fakes so convincing that it is almost impossible to tell if they are real. (When will we see a deep-fake video showing Neil Armstrong privately "confessing" that the moon landing was a hoax?) How will we be able to know what actually happened in the past when there will be fake alternatives that look just as real as the real thing?


Ouch - you are so correct. Suddenly printed books look really attractive. You can change an online book at will - but not a printed one.

Your point about the "I Have a Dream" speech really hits home. A slight change in wording could have a very negative impact. If a group decided to "flood the zone" with a bogus copy, who can predict the results? What about the "Letter from Birmingham Jail"? For an audience who hadn't read it, a change would not be noticed.

An Armstrong confession would fire up conspiracy theories - unless the public understood that the images they returned could not have been computer generated. Not many people do.

Maybe mixing confirmed ID with a history of correct validation has some impact. A source that is 90% correct in validation is better than one that is not. A "reputation" score becomes a thing of value.

One thing is sure - a method for confirming ID and holding people responsible is a first, important step. It might even convince people to stop saying nasty things online that they would never say in person.

Thanks for your reply.


And the people / companies that do the validation are validated by whom? The CVA (Central Validation Agency)? I see your point, but doubt its practicality.


You are correct; currently the "media", which should be doing the validation, is not (except where it suits a narrative).

The only possibility I see would be to somehow employ "the wisdom of the crowd" - multiple people solving the same problem independently, with a method for collecting the results. Maybe not even "paid" - after all, no one pays Substack contributors.

Not the incredibly braindead "like" button - perhaps the worst idea anyone ever had for rating. (Must have been a programmer that came up with that...)

Maybe add markup to HTML - something that separates fact from opinion: <fact>The sky is blue, sometimes.</fact>. Even without validation, you could ignore any posting/article that had no identified "facts".

For instance, the other day I saw a post that claimed a company had batteries that could power an aircraft - and be recharged in 30 minutes. Maybe they do - but fast charging typically has a negative effect on battery life; it's a problem that hasn't been solved. So, if the author flagged that as a "fact", perhaps someone with battery expertise could challenge it. Without that flag, you could regard it as a "claim".
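If markup like that ever caught on, consuming it would be straightforward. Here is a minimal sketch (the <fact> tag is hypothetical; only the standard library is used) that pulls out the spans an author explicitly flagged as factual claims, so they could be routed to someone able to check them:

```python
from html.parser import HTMLParser

class FactExtractor(HTMLParser):
    """Collect the text inside hypothetical <fact>...</fact> tags."""
    def __init__(self):
        super().__init__()
        self.in_fact = False
        self.facts = []

    def handle_starttag(self, tag, attrs):
        if tag == "fact":
            self.in_fact = True
            self.facts.append("")

    def handle_endtag(self, tag):
        if tag == "fact":
            self.in_fact = False

    def handle_data(self, data):
        if self.in_fact:
            self.facts[-1] += data

extractor = FactExtractor()
extractor.feed("<p>Great plane! <fact>The battery recharges in 30 minutes.</fact></p>")
print(extractor.facts)  # claims to hand to someone with battery expertise
```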

If you combine some method of validation with strict ID of the source - you have at least a small part of a solution.

Thanks very much for reading and replying.


Is it possible to create a digital watermark that could be used to identify AI-authored material? I'd like to see a "caveat" emoji as well, one that warns readers against questionable stuff, whether computer- or human-generated.
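One naive illustration of what a text watermark could even mean (a toy sketch with a made-up key; real proposals embed statistical signals during generation and are much harder to strip, whereas this one disappears the moment someone deletes the invisible characters): hide a keyed checksum in zero-width characters that a checker can later verify.

```python
import hashlib
import hmac

ZW = {"0": "\u200b", "1": "\u200c"}  # zero-width space / zero-width non-joiner
KEY = b"generator-secret"             # hypothetical key held by whoever runs the generator

def watermark(text: str) -> str:
    """Append an invisible, HMAC-derived bit pattern to the visible text."""
    digest = hmac.new(KEY, text.encode("utf-8"), hashlib.sha256).digest()
    bits = bin(int.from_bytes(digest[:4], "big"))[2:].zfill(32)
    return text + "".join(ZW[b] for b in bits)

def is_watermarked(text: str) -> bool:
    """True if the trailing invisible characters match the HMAC of the visible part."""
    visible = text.rstrip("".join(ZW.values()))
    return text != visible and watermark(visible) == text

stamped = watermark("This paragraph was produced by a model.")
print(is_watermarked(stamped))                        # True
print(is_watermarked("An ordinary human sentence."))  # False
```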
