40 Comments
Feb 9, 2023 · Liked by Gary Marcus

“Scaling neural network models—making them bigger—has made their faux writing more and more authoritative-sounding, but not more and more truthful.”

Hero-level posting.

Feb 9, 2023 · edited Feb 9, 2023 · Liked by Gary Marcus

Great read, although I was expecting to find the actual reason that Google bombed but Microsoft didn't. Was it because Google rolled out an inferior version of a BS generator? Or was it because Google has been gradually losing the trust of the general public?

This being said, is it just me or has anyone else noticed that deep learning is the AI technology that drives both autonomous vehicles and LLMs? In spite of the hype and the successes in some automation fields, DL has failed miserably in the intelligence arena. Isn't it time for AI to change clothes, so to speak? I got all excited when I heard that John Carmack was working on a new path to AI, only to find out that he got his inspiration from OpenAI's deep learning guru, Ilya Sutskever, who gave him a reading list of 40 DL research papers. Lord have mercy.

I really don't understand the obsession with deep learning in AI research. The brain generalizes, but a deep neural net optimizes objective functions; the two could hardly be more opposite. I'd say it's high time for AGI researchers to drop DL and find something else to work with, but maybe it's just me.

Expand full comment

¡Feliz cumpleaños! ("Happy birthday!" Didn't need Google Translate for that; I uploaded that data set when I was living in Spain.)


In school, students are asked to show their math and provide sources for the knowledge they present. The same should be expected of AI. The lack of both, and ChatGPT's fabrication of fictitious scientific articles, is a glaring problem.


People do NOT like Google. Not many are rooting for it to win, while a lot of people not only want Bing to win but want it to wipe the floor with Google. Of course, Microsoft is perceived as OpenAI's benefactor, and that earns it a lot of brownie points.

Google has exploited its monopoly severely, making every single person with a website dance to its tune. It's no surprise that it is on the receiving end this time; it's been a long time coming. Nadella is making Google dance, and it's satisfying to watch.

Not just that: the general public now knows that Google has been sitting on its AI without releasing it to the world. Why the hoarding? Only to monetize it selfishly rather than share the breathtaking capabilities of this amazing tech with ordinary people. Why, then, would ordinary people (journalists included) view Google with any charity, now that it has released some botched-up thing, and only because it's scared of Microsoft?

I had the same feeling after I dug into Facebook's failed Galactica release and Yann LeCun's tweets justifying its less-than-stellar abilities: "In what universe does 'free demo' mean 'ready for prime time product'?"

https://twitter.com/ylecun/status/1593412417664401408

What does that mean? That the crappy demo version is free, but later they're going to monetize it, after fixing it with our help. Whether they meant it that way or not, that's how it felt.

Companies like Google and Facebook are not loved; they're tolerated. Microsoft too, but it is currently benefiting from its association with OpenAI. They'd better make hay while the sun shines.

OpenAI is perceived as:

- generous

- super smart

- working at the cutting edge of AI, and releasing it in a form that is accessible and useful to all.

I don't know how long they're going to remain that way, but that's the current sentiment.

As you mention in the last paragraph, the hallucinations can quickly get tiring because you can't trust anything it says. One point to add here is that even information presented on websites requires a second level of fact-checking if accuracy is a concern.

AIs like ChatGPT would be excellent assistants for brainstorming and producing first drafts of everything. I hope we get used to using them that way rather than relying on them for accurate information.


It is clear we agree on almost everything here (I was just writing about something like this yesterday and mentioned Watson as well — funny).

What I am really curious about, Gary, is your "I do actually think that these problems will eventually get ironed out". Apart from the fact that humans are living proof that AGI machines are possible, so it must be doable, where do you see the 'fresh discoveries' that are needed to get from where we are now past the Watson, LLM, and other dead ends we have seen so far? For instance, do you expect the problem to be solvable in the digital domain? (I don't, really: just as every digital computer is in the end a Turing machine, every digital AI approach is a form of rule-based solution, even if the rules are data-driven and/or hidden.) Could you write, maybe, about why you estimate it is solvable and which areas require 'fresh discoveries'?


Another great challenge within your observation that “Hallucinations are in their silicon blood” comes when LLMs are called upon to respond to queries that go beyond data, facts, and math, which can be checked and verified. Billions are being spent right now on mental health apps, a response to the growing awareness that America (and the world) is in the midst of a mental health crisis. Making the issue more urgent, while seemingly strengthening the business case, is the reality that America may be short more than 4 million mental health workers. Many of these mental health apps are simply bad, some are harmful, and few are truly helpful.

But with the veneer of authority and eloquence that LLMs make possible, and with responses that cannot be easily, if ever, fact-checked for accuracy, we can expect the makers of these apps to embrace this tech with gusto. The complexity of ensuring that LLMs, in their current form, respond appropriately to an individual person with a mental health challenge is exponentially greater than the challenge of driverless cars, which you warned about early on, and failure here is much deadlier.


The leader showed problems, so it was a disaster.

The challenger showed problems, so it was a newly emerging, exciting contender.

Google has further to fall.


It depends on what you are using these tools for.

If you feed them your thoughts and ask them to write the story for you in a certain style, they're great.

If you start your journey of discovery there and are aware it could be wrong, great again.


They did a big public event; that was mistake number one. If they wanted to fart, they should have farted silently, like ChatGPT did.


In a segment of the podcast linked below, Kevin Roose and Casey Newton discuss and play parts of an interview with Sam Altman and Kevin Scott, Microsoft's CTO. It's an interesting listen if you want to hear the media and corporate perspective; it also helps answer why Microsoft's demo was hailed by the press and Google's was not.

Another interesting point is the way Kevin Scott explains how ChatGPT is integrated with search: basically, the user prompt is fed to the model to generate search queries, and the search result pages are then fed back to the model to do the write-up.
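For what it's worth, here is a minimal sketch of that loop as I understand it from the interview. The helpers `llm_complete` and `search_web` are hypothetical stand-ins for a real model endpoint and a real search API, not anything Microsoft has published:

```python
from typing import List


def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM completion endpoint."""
    raise NotImplementedError("wire up a real model provider here")


def search_web(query: str, max_results: int = 3) -> List[str]:
    """Hypothetical stand-in for a search API; returns text snippets."""
    raise NotImplementedError("wire up a real search API here")


def search_augmented_answer(user_prompt: str) -> str:
    # Step 1: feed the user prompt to the model to generate search queries.
    raw = llm_complete(
        "Write up to three web search queries, one per line, "
        f"that would help answer this question:\n{user_prompt}"
    )
    queries = [line.strip() for line in raw.splitlines() if line.strip()]

    # Step 2: run the queries and collect result snippets.
    snippets: List[str] = []
    for query in queries:
        snippets.extend(search_web(query))

    # Step 3: feed the search results back to the model for the write-up.
    sources = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return llm_complete(
        "Using only the numbered sources below, answer the question "
        f"and cite the sources you used.\n\nSources:\n{sources}\n\n"
        f"Question: {user_prompt}"
    )
```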

I think that, in addition to removing the need to constantly retrain on the latest web data, this approach could help avoid or at least minimize the hallucinations.

https://www.nytimes.com/2023/02/10/podcasts/bings-revenge-and-googles-ai-face-plant.html


AI-answered search will be of limited utility. Most people want comprehensive writing on the subject they are looking up, which is something only a human writer can provide. AI can retrieve brilliant human-written articles and recommend them to us, but for it to write the article itself, and for that article to be of higher value than a human answer, is impossible. No AI will make music better than Mozart or write a film better than Tarantino. Similarly, if I intend to climb a mountain, then in order to prepare I want to read an article from an experienced climbing expert, not an AI.

It's 2023 and we still clean our own behinds after excretion; machines are not ready to do everything for us just yet.


I wish “The Algebraic Mind” had an audiobook. I need a new one for my commute. But alas, no. Gary... get someone to record that! Or maybe I’ll break down and buy the Kindle book and keep my wife awake while I read it in bed.


Perhaps the true Turing test is whether the AI agent can lie to you, know it did, and try to hide the lie.

Now that's scary.


Happy B'day, Gary!
