40 Comments
Feb 9, 2023 · Liked by Gary Marcus

“Scaling neural network models—making them bigger—has made their faux writing more and more authoritative-sounding, but not more and more truthful.”

Hero-level posting.

Feb 9, 2023 · edited Feb 9, 2023 · Liked by Gary Marcus

Great read, although I was expecting to find the actual reason that Google bombed but Microsoft didn't. Was it because Google rolled out an inferior version of a BS generator? Or was it because Google has been gradually losing the trust of the general public?

This being said, is it just me or has anyone else noticed that deep learning is the AI technology that drives both autonomous vehicles and LLMs? In spite of the hype and the successes in some automation fields, DL has failed miserably in the intelligence arena. Isn't it time for AI to change clothes, so to speak? I got all excited when I heard that John Carmack was working on a new path to AI only to find out that he got his inspiration from OpenAI's deep learning guru, Ilya Sutskever, who gave him a reading list of 40 DL research papers. Lord have mercy.

I really don't understand the obsession with deep learning in AI research. The brain generalizes, while a deep neural net optimizes objective functions; the two could hardly be more opposed. I'd say it's high time for AGI researchers to drop DL and find something else to work with, but maybe it's just me.
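
(For anyone unfamiliar with the jargon, "optimizing an objective function" just means mechanically nudging parameters to shrink an error score. Here is a toy sketch in Python; it is purely illustrative and describes no particular system.)

```python
import random

# Toy illustration of "optimizing an objective function":
# gradient descent nudging one weight w to minimize squared error.
# Purely illustrative; not a claim about any real system.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs where y = 2x
w = random.uniform(-1.0, 1.0)                # random starting weight
lr = 0.01                                    # learning rate

for _ in range(2000):
    # objective: mean squared error of the prediction w * x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                           # step downhill on the objective

print(f"learned w = {w:.3f}")  # converges toward 2.0 on this data
```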

¡Feliz cumpleaños! (That's "Happy birthday!" No Google Translate needed; I uploaded that data set when I was living in Spain.)

In school, students are asked to show their math and provide sources for the knowledge they present. The same should be expected of AI. The lack of both, plus ChatGPT's fabrication of fictitious scientific articles, is a glaring problem.

I agree. LLMs are a clear and present danger to society and should be banned on the internet. Our civilization is already drowning in lies. We don't need to generate more. The worst part is that the BS that they create will be used as input data for future LLMs.

People do NOT like Google. Not many are rooting for them to win, while a lot of people not only want Bing to win but want it to wipe the floor with Google. Of course, Microsoft is perceived as OpenAI's benefactor, and that earns it a lot of brownie points.

Google has severely exploited its monopoly, making every single person with a website dance to their tune. It's no surprise that they are at the receiving end this time, and it's been a long time coming. Nadella is making them dance and it's satisfying to watch.

Not just that: the general public now knows that Google has been sitting on its AI without releasing it to the world. Why the hoarding? Only to selfishly monetize it, rather than share the breathtaking capabilities of this amazing tech with ordinary people. Why then would ordinary people (and that includes journalists) view Google with any charity, now that it has released some botched-up thing only because it's scared of Microsoft?

I had the same feeling after I dug into Facebook's failed Galactica release and Yann LeCun's tweets justifying its less-than-stellar abilities: "In what universe does 'free demo' mean 'ready for prime time product'?"

https://twitter.com/ylecun/status/1593412417664401408

What does that mean? That the crappy demo version is free, but later we're going to monetize it, after we fix it with your help. Whether they meant it that way or not, that's how it felt.

Companies like Google and Facebook are not loved; they're tolerated. Microsoft too, but it's currently benefiting from its association with OpenAI. They'd better make hay while the sun shines.

OpenAI is perceived as:

- generous
- super smart
- working at the cutting edge of AI, and releasing it in a form that is accessible and useful to all.

I don't know how long they're going to remain that way, but that's the current sentiment.

As you mention in the last paragraph, the hallucinations can quickly get tiring because you can't trust anything the system says. One point to add here is that even information presented on websites requires a second level of fact-checking if accuracy is a concern.

AIs like ChatGPT would be excellent assistants for brainstorming and producing first drafts of everything. I hope we get used to using them that way rather than relying on them for accurate information.

It is clear we agree on almost everything here (I was just writing about something like this yesterday and mentioned Watson as well — funny).

What I am really curious about, Gary, is your "I do actually think that these problems will eventually get ironed out". Apart from the fact that humans are living proof that AGI machines are possible, so it must be doable, where do you see the 'fresh discoveries' needed to move beyond Watson, LLMs, and the other dead ends we have seen so far? For instance, do you expect it to be solvable in the digital domain? (I don't, really: just as every digital computer is in the end a Turing machine, every digital AI approach is a form of rule-based solution, even if the rules are data-driven and/or hidden.) Could you write, perhaps, about why you estimate it is solvable and which areas require those 'fresh discoveries'?

author

The closest extant paper for now is my "Next Decade in AI" on arXiv.

Interesting. I'm going to reread it and maybe get back to it, but one question I do have: should I take 'robust AI', as mentioned in that article, to be a goal roughly the same as 'human-level general intelligence' (Gi) without self-improvement?

Fully agree with the spirit of your remark. More specifically, in short: the digital computer as a (possible) model of the brain is an utterly wrong conjecture. The brain does not operate as a set of logic gates but by exploiting a variety of fundamental physical/chemical/spatial interactions that are instantiated as a set of recursively complex networks, down to its nano-structure. This extraordinary complexity is what allows it to exploit a usable form of world-model (think Ashby's Law of Requisite Variety), some of it innately configured from a transcription of the genetic code ("priors"), some evolving through the plasticity induced by, say, learning. This (non-digital) machinery is a non-computable entity, except when one reduces it to a very, very narrow set of behaviours, in which case the decision space collapses and becomes computable; i.e., iff the relevant data is ergodic, it can be made to converge to a satisfactory state by exploiting statistical inference (and be useful). This is the process of "narrow AI", which collapses (becomes unreliable) when expanded to any situation that includes complexity, i.e. the realm of reality. Turing's suggestion of the "brain as a computer" has put the whole AI field on a track without an exit.

I am reminded of Ludwig Wittgenstein's observation that sometimes you take a door and end up in a room with no other exits, and at that point it is wise to leave the room via the door you entered. He used that analogy for 'senseless questions': asking one is entering the room; inventing answers is trying to exit the room any way but the way you came in; sometimes the healthy thing to do is to notice the question is senseless and stop asking it, i.e. leave the room through the door you entered. Human intelligence is 'malleable instinct running on malleable hardware', which is a completely different architecture from that of digital computers, and one which digital computers will not be able to mimic in any sufficient way.

I like your reference to Wittgenstein's observation. A wisdom-metaphor I have often used is Magritte's painting "La trahison des images" ("The Treachery of Images"), with its caption "Ceci n'est pas une pipe" ("This is not a pipe"), which seeks to underline the difference between a "thing" and its model. Rightly so, modern technology has made digital computing a compelling paradigm. However, many do not see the difference between "very powerful" and "universal". In the case of the brain, with its unique complexity, which runs from the macroscopic down to the molecular (and perhaps even further, as per Penrose's quantum conjecture for consciousness), modelling is out of reach. From a system-theoretic perspective, it is neither observable (the intermediate structures are out of reach) nor controllable. Viewing it as a digital machine is plainly wrong.

Hi Denis, indeed. Physical structures matter (pun intended :)).

Curious what you think of my paper related to this: https://www.researchgate.net/publication/357642784_Biological_Intelligence_Considered_in_Terms_of_Physical_Structures_and_Phenomena

//Apart from the fact that humans are living proof that AGI machines are possible//

I have no idea why you conclude that.

Because humans are biological machines that exhibit Gi?

Show me your proof that we are mere machines.

Of course there is no proof of that, nor of the opposite. But I think we might be going off topic here.

You said that we humans prove that AGI machines are possible since we are machines ourselves. If we are not machines, then how can we have a proof that AGI machines are possible?

Why are we not machines?

What is the definition of a machine? We are clearly very complex biological mechanisms, so it seems to me that 'biological machine' covers us rather well. A virus is a biological machine (and alive under certain definitions of 'life'), etc. A human is also clearly a biological machine, but one that is (uniquely) capable of self-reflection at a level matched by no other machine we know, human-invented or evolved.

If you see a human as a biological 'machine' (which we are, unless you invoke non-scientific arguments; note that non-scientific does not by definition mean false), then our existence is proof that Gi is possible, and if Gi is possible, AGi is as well. If you see a human as something that is more than biological, then, of course, our existence is not proof that AGi is possible.

So the statement 'humans are biological machines' leads to 'AGi is possible, because Gi exists'. If you do not accept the former, the conclusion is not acceptable to you. The former is a matter of non-scientific conviction. That doesn't mean it is wrong; it means we simply cannot know, and so many convictions (beliefs) are possible. I think there is a strong argument for the former, but there will never be proof.

Over the last decades we have learned a lot about the workings of our intelligence, e.g. through the work of Stanislas Dehaene. There is a reason I wrote Gi with a small 'i' :-) See https://ea.rna.nl/2022/10/24/on-the-psychology-of-architecture-and-the-architecture-of-psychology/

Another great challenge within your observation that "hallucinations are in their silicon blood" comes when LLMs are used to respond to queries that require answers going beyond data, facts, and math, which can be checked and verified. Billions are being spent right now on mental health apps in response to the growing awareness that America (and the world) is in the midst of a deepening mental health crisis. Making the issue more urgent, while making the business case seemingly stronger, is the reality that America may be short more than 4 million mental health workers. Many of these mental health apps are simply bad, some are harmful, and few are truly helpful. But with the veneer of authority and eloquence that LLMs make possible, in responses that cannot be easily, if ever, fact-checked for accuracy, we can expect the makers of these apps to embrace this tech with gusto. Ensuring that LLMs, in their current form, respond appropriately to an individual with a mental health challenge is exponentially harder than the driverless-car problem you warned about early on, and failure here is far deadlier.

The leader showed problems, so it was a disaster.

The challenger showed problems, so it was an exciting, newly emerging contender.

Google has further to fall.

It depends on what you are using these tools for.

If you feed them your thoughts and ask them to write the story for you in a certain style, they're great.

If you start your journey of discovery there and are aware it could be wrong, great again.

They did a big public event, mistake number one. If they want to fart, they should've farted silently like ChatGPT did.

In a segment of the podcast linked below, Kevin Roose and Casey Newton discuss and play parts of an interview with Sam Altman and Kevin Scott, Microsoft's CTO. It's an interesting interview if you want to hear the media and corporate perspective, and it also helps answer why Microsoft's demo was hailed by the press and Google's was not.

Another interesting point is the way Kevin Scott explains how ChatGPT is integrated with search: basically, the user prompt is fed to the model to create search queries, then the search result pages are fed back to the model to do the write-up (a rough sketch of that loop follows the link below).

I think that, in addition to removing the need to constantly retrain on the latest data from the web, this could help avoid or minimize the hallucinations.

https://www.nytimes.com/2023/02/10/podcasts/bings-revenge-and-googles-ai-face-plant.html
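
For the technically curious, that loop is simple enough to sketch. Below is a minimal outline in Python; `generate` and `web_search` are hypothetical placeholders standing in for an LLM completion call and a search API, not any actual Microsoft or OpenAI interface:

```python
# Minimal sketch of the search-integration loop described above.
# `generate` and `web_search` are hypothetical placeholders for an LLM
# completion call and a search API; no real Bing/OpenAI interface is implied.

def generate(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError

def web_search(query: str, top_k: int = 3) -> list[str]:
    """Placeholder for a search API returning result-page snippets."""
    raise NotImplementedError

def answer_with_search(user_prompt: str) -> str:
    # Step 1: the model turns the user's prompt into search queries.
    queries = generate(
        f"Write up to three web search queries for: {user_prompt}"
    ).splitlines()

    # Step 2: run the queries and collect snippets from the result pages.
    snippets: list[str] = []
    for query in queries:
        snippets.extend(web_search(query))

    # Step 3: the model writes the answer grounded in the retrieved text,
    # which is what may keep it current and reduce hallucination.
    sources = "\n\n".join(snippets)
    return generate(
        f"Using only these sources:\n{sources}\n\nAnswer: {user_prompt}"
    )
```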

AI-answered search will be of limited utility. Most people want comprehensive writing on the subject they are looking up, which is something only a human writer can provide. AI can retrieve brilliant human-written articles and recommend them to us, but writing the article itself, at higher value than a human answer, is impossible for it. No AI will make music better than Mozart or write a film better than Tarantino. Similarly, if I intend to climb a mountain, then to prepare I want to read an article from an experienced climbing expert, not an AI.

It's 2023 and we still clean our own behinds after excretion; machines are not ready to do everything for us just yet.

I wish “The Algebraic Mind” had an audiobook. I need a new one for my commute. But alas, no. Gary... get someone to record that! Or maybe I’ll break down and buy the Kindle book and keep my wife awake while I read it in bed.

author

it’d be rough sledding in audio form :)

Perhaps the true Turing test is whether the AI Agent can lie to you, know it did, and try to hide the lie.

Now that's scary.

Happy B'day, Gary!
