38 Comments
Jun 28·edited Jun 28Liked by Gary Marcus

"lied like an LLM". A new one for the corpus. (And we get lies *from* LLMs as well as *about* them ... rather like politicians indeed...).


This is even better than lied like a rug.


Maybe Trump is actually an LLM robot?


"In the future, the State will synthesize all news broadcasts ..." – I made that up (with no help from an AI).


Being a Canadian and long-time admirer of the US, I watched last night's debate with dismay. Your comment is right on.

Jun 28Liked by Gary Marcus

Great points! Good stuff Gary!


It's hard to know where to begin. First, it is Congress that enacts legislation, and as Bill Foster (D-IL) said at the APS spring meeting several years ago, the US is shamefully deficient in technical representation in our legislative organs in comparison with the EU or PRC. So having some more technically literate state and national legislators would be a good start. But how can that happen when arguably 50% of the population is downright antiscientific and the majority of the rest are woefully ignorant?

Second, it is obvious from the transcript of last night's debate that Biden was coherent, honest, and responsive, while Trump was none of the above. It would be helpful if the pundits would make the effort to examine the content rather than focusing on superficial appearances.

Finally, and perhaps I missed it, it's hard to blame either candidate for not mentioning AI when the topic was never broached by the moderators.


But we have that intellectual giant Kamala Harris who is leading the AI initiative. What more could Americans ask for?


Sorry, give me a President who understands people and is not easily conned. One who asks questions that make people uncomfortable and makes unreasonable demands like Steve Jobs.

Last night's debate certainly had its share of misstatements and outright lies, but pretending to understand AI and its impact would have been the ultimate lie. Just contrast Gary Marcus's and Ethan Mollick's views of LLMs if you don't agree.

And consider this quote about leadership -

“It can be easy to assume that a sign of being a great leader is having all the answers; however, often the opposite is true,” says a Forbes article on this topic. “No leader knows everything, but great leaders are always willing to learn and grow in their knowledge, and often have ways of finding the answers they need.”

Source:

https://www.businessrecord.com/on-leadership-leading-when-you-dont-have-all-the-answers/#:~:text=%E2%80%9CIt%20can%20be%20easy%20to,finding%20the%20answers%20they%20need


I too am very disappointed by the NYTimes Editorial Op-Ed asking for Biden to step down as a presidential candidate after a single debate performance. We don't have to agree with everything that the NYTimes opines. It was obvious that Biden was having a bad night. America has a way of putting their heroes on pedestals and expecting them to perform like trained seals. But America also loves celebrities who have fallen from grace and bounced back.


I think an AI-savvy president would be more than we could reasonably hope for. Instead, we should hope for one that is comfortable with science and business and has the ability to pick good people and listen to them. Of course, today's issue is just getting Biden to step aside and throw the race open to replacement candidates. Virtually every Biden-friendly pundit today is asking him to drop out. It's going to be hard for Biden to ignore this.


Perhaps an interesting (thought) experiment: would an LLM have produced better, more factual and/or more coherent responses to the questions than the actual candidates?


That would have *sounded* smoother, and more linear, but we could not trust the veracity of the answers. A bit more dangerous in a way, yes?

Jun 29·edited Jun 29

Indeed, fully agree there, but my point was: to what extent can you rely on the veracity of the human debaters? How many mistakes would an LLM make vs. these candidates on the questions asked, when not prompted to be misleading?

In some sense, what we now sometimes call hallucinations in the case of LLMs we would rather label as personal opinions, debates, or panel discussions in the case of humans, where you often need to rely on the overall credibility of those individuals.

In some sense, you could call the current outputs of LLMs 'statistical opinions' of sorts.

But what is troubling in that sense is that these 'statistical opinions' are now actually being used for seemingly low-hanging-fruit use cases, such as product Q&A chatbots, document summarizers, etc., for which, however, you would rather not have an opinion as an answer, human or statistical.


This turned into yet another mini-essay – so perhaps I’ll post it to my Substack. But here we go anyway (I tried to keep it succinct, but ideaphoria is at work!)

As far as “rely on the veracity of the human debaters” – when we are talking about politicians, I would say zero.

Overall it is more like a form of entertainment in that respect (a rather sad and bizarre one last night). Sort of like, team sports...

Trying to equate LLM hallucinations with human opinions is a bit of a stretch, as you may realize, but indeed an interesting comparison. And illuminating the difference is what we are all about here, in a sense.

The difficulty here is that first you have to have some clarity about what counts as “facts” versus “opinions,” as well as, in the public and political spheres, the play of “misinformation” (the old-fashioned word is propaganda, though I suppose that was more about state-sponsored, organized campaigns, not wild internet memes). Not to mention the, er, fact that most of the electorate can’t seem to tell the difference, nor are they really interested: it’s simply a matter of taking one side that’s “right” versus another. It’s often like religion, where it’s more about beliefs than facts. I define a belief as an idea that is held despite evidence. And one also needs the ability to evaluate truths: critical thinking skills, knowledge, and an interest and openness.

As far as knowledge goes, there are two types: relative and absolute. (Some argue that there’s no such thing as absolute knowledge, but they contradict and undermine their own position, since in that case it is obviously impossible to evaluate what they say – as in “everything I say is a lie” contradicts itself.) So I don’t buy that it’s all a “narrative” or a story, or relative. The self-evidential quality of logic and math, for one, has the perfume of the absolute, as do the perception of beauty and love. Relative knowledge – which is everything about the world – is always probabilistic, never 100% certain, though it can approach that. The only absolute knowledge I can have is the unchanging direct knowledge of “I am conscious” and “I exist” (and the two are inextricable).

Then there is the issue of intent: what is the intent of the speaker (under the circumstances), and do LLMs have (real) intentionality, or could they?

In addition, political speech functions differently than, say, speech by scientists, engineers, philosophers, teachers, and so forth, or everyday speech.

True, an LLM (a "good" one) would certainly be better as far as language output, and as far as the veracity of *some* of the content – how much, and what underlies it, is up for detailed inquiry… and its truthfulness and factualness are of course part of this whole enormous debate and enterprise of modern “AI”. Its knowledge is ungrounded, second-hand, and text-based (and basically statistical, as you mention, though with props): thus very “thin”, and unconnected with consciousness or the world.

And when we are talking about one speaker who was showing rather pronounced signs of age and was not able to maintain a line of thought due to memory or other cognitive issues, and another speaker who had no interest in truth, facts, or, uh, ethical "alignment" – only rhetorical effect and manipulation of a set of potential voters, you could say – then the LLM would beat them, for sure. It could be more coherent, have better access to data (factual or not), and be potentially very good at manipulation. You could program it to act any of those parts – the manipulative, rhetorical player with questionable moral scruples intent on winning at all costs, or the feeble-minded aging politician with good intentions who has lost his footing in a debate or discussion.


Let me rephrase that: an LLM would be A LOT more dangerous than an "actual" candidate. Smooth delivery, seductive, even impressive language performance, but God knows what's behind it in terms of facts, or common sense, or creativity – with no real understanding – and worse, whose agenda: the builders', the programmers', the corporation's, and whoever is pulling the levers thereof...


yes. And that is the problem with the idea that LLMs are not useful. Because if they are more accurate than a US president, what does that make humans? Stupider than a cat??? HAHAHAHAHA


Perhaps the previous Democratic candidate Andrew Yang is such a candidate?


LLM: Large Liar Model


I would use as a starting point this profound analysis, which is interesting from a slightly different perspective: little was said about AI, but it is often underestimated how much the way we talk about AI (or any other trending social phenomenon) at a moment of such relevance, watched all over the world, inevitably influences opinions on that phenomenon. People can be influenced by either candidate's opinions on the topic, by how they approach or will approach the tools, and much more. This is why the media narrative, in this exponential AI trend, is a crucial factor to take into consideration beyond the merely technological or economic aspects.


But America has that intellectual giant VP Kamala Harris leading the government AI initiative. I'm sure she'll do the same professional job she did as the border guru.


I would put it more on the hosts that they didn't bring up AI (I'm halfway through watching and haven't heard anything to do with industry or tech), but the tech seems not to have affected people enough to make it as big a political issue as, say, Israel, Ukraine, or even Jan 6th.

A big theme of today is truth. Just watched a 4-hour video on how bad the Star Wars hotel was before it closed, and it reminded me of the general lack of effort and accountability in our society. With everyone dancing around a president and candidate who seems to be ill, who has time to police corporate statements? The SEC? The FTC?

Politically, AI is important to the extent that campaigns and administrations are using it. For the moment all the scary images and boilerplate text that AI generates isn't ready for small market campaign ads let alone global policy.


I am sure people misuse AI, but my experience with an LLM fed relevant information is great ... like in every application of statistics or machine learning, the key to successful LLM applications will be carefully cleaned-up data ... as a very basic example, if you feed an LLM a broken-up table you're not going to get the right answer ... that's what happens when people use canned solutions for chunking ... to get meaningful answers you need consistent sophistication and quality controls at each stage of the process ... in other words, no matter how good GPT is, if the folks at the companies you mentioned don't know how to feed it context and prompt it properly, they'll get meaningless answers
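The broken-table point above can be sketched in a few lines of Python. Everything here is a hypothetical illustration, not from any particular RAG library: the toy table, and the `naive_chunks` and `table_aware_chunks` helpers, are made up to show how fixed-size chunking can cut a table mid-row while structure-aware chunking keeps the header attached to every row.

```python
# A minimal sketch, assuming a small markdown table as the "document".
doc = (
    "Quarterly revenue by region:\n"
    "| Region | Q1 | Q2 |\n"
    "| ------ | -- | -- |\n"
    "| EMEA   | 10 | 12 |\n"
    "| APAC   | 7  | 9  |\n"
)

def naive_chunks(text, size=40):
    """Split on a fixed character count, ignoring document structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def table_aware_chunks(text):
    """Attach the table header to every row so each chunk stands alone."""
    lines = text.splitlines()
    header = [l for l in lines if l.startswith("|")][:2]  # header + separator
    chunks = []
    for line in lines:
        if line.startswith("|") and line not in header:
            chunks.append("\n".join(header + [line]))
        elif not line.startswith("|"):
            chunks.append(line)
    return chunks

row = "| EMEA   | 10 | 12 |"
# The 40-character chunks cut the EMEA row across a chunk boundary,
# so no retrieved chunk contains the full row with its header...
print(any(row in c for c in naive_chunks(doc)))  # False
# ...while every table-aware chunk carries the header with the row.
print(any(row in c and "| Region |" in c for c in table_aware_chunks(doc)))  # True
```

The same idea underlies the commenter's "quality controls at each stage" remark: whatever chunker is used, a retrieved chunk has to be self-contained for the LLM to answer from it.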


The average American voter or moderator wouldn't care about AI in a presidential debate. To the average person on the street, AI is viewed as a new cool technology. Unless things reach a point where tens of millions of jobs are being replaced, and robots are wandering the streets as in Blade Runner along with life-size Cortana holograms, no presidential candidate or voter will take AI seriously.

Climate change and wars are way bigger, mounting physical threats. Over more than an hour and a half of back-and-forth, and barely an answer on climate change.
