48 Comments

Hilarious. Half of AI is science. The other half is a scam.


Neither is true. This is a polemical statement with little value. AI is painstaking empirical engineering, with little to no scientific underpinning, and the progress has been very nice.


In that case, it is disingenuous to even call it AI. The study of intelligence, both natural and artificial, encompasses several fields of science.


Field names aren't super important. And fields have always overlapped, long before AI. I do think it will be super interesting if AI starts to overlap more and more with existing cognitive sciences (Neuroscience, Philosophy, Psychology, Linguistics, Anthropology).

Current "AI" conferences use a variety of terms: Neural Information Processing Systems, Machine Learning, AI, Computer Vision, Computational Linguistics, Knowledge Discovery, Natural Language Processing. Names are often historical or idiosyncratic.


40% is tinkering, 10% is reverse-engineering some of the tinkering into math formulas to give it the cachet of science, and 90% is noise. And yes, it doesn't add up properly either.

Oct 21, 2023 · Liked by Gary Marcus

Channeling Richard Feynman... :)

"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled."

Oct 22, 2023 · Liked by Gary Marcus

I mainly feel bad for drivers who took this stuff to heart and needlessly lost sleep and had their bodies barraged with stress hormones because of it. For some years I've been telling whoever wants to listen: 'this is basically the management class trying to spook you all and keep you from getting too uppity.' I don't personally expect full self-driving in my lifetime. I expect lots of nice advances in the driving experience, sure. But not that.


Love this

Oct 21, 2023 · edited Oct 21, 2023 · Liked by Gary Marcus

Self-driving cars always struck me as something that, to be viable, requires a ridiculous level of accuracy to be publicly accepted. They have to be at least as good as people (who don't crash on around 99.999+% of the drives they take), and they exist in a world where the landscape differs from place to place and is constantly changing. There is an illusion of being close to L5 self-driving if you get to a 99% or 99.9% chance of not crashing on a drive, but in reality that's miles away. At those levels you'd have 100x to 1,000x+ more people crashing cars.

I've been in countries where cows were sleeping in the middle of the road and people drove around them. I can't imagine ever getting big samples of data for all of these scenarios. Will models have to be retrained whenever a culture decides to treat animals differently and animals decide to sleep in the middle of a road?


The hype is real, but to be fair, we’re closer than we’ve ever been, and the technology is very impressive compared to previous generations that preceded AI winters. I will bet the hype will translate to serious change in the next decade.


Important question - closer to what?


In this case I’m responding to the headlines about self-driving cars. But we’re closer than ever to a lot of other things, too.


If you mean that mainstream AI is closer to AGI than ever, I have to disagree. I see nothing in the generative AI model that is even remotely related to intelligence as we observe it in humans and animals. I'll even say that, if AGI is the mainstream goal, they've taken a giant step backward.


Oh no, I don’t mean AGI in the sense of a complete sentient being (though I do think GPT-4 is almost magical in its general reasoning abilities, considering that it’s only an autoregressive model at the end of the day). I was thinking more of niche products that generative AI is making happen (e.g. videos from text prompts; convincing speech-to-text; biological entities like small molecules and proteins; etc.).

On self-driving cars, it’s an interesting debate: current AI can get us, like, 99% of the way there, but that remaining 1% can mean a lot of unsafe driving and maybe lives lost. The question is, are we on the right track, and only need an engineering effort to make that final push? Or do we need a fundamentally different approach?


Yeah, that remaining 1% is basically an impossible leap.

Here is some basic, not particularly rigorous arithmetic:

If there are 10,000 car fatalities a year, and whenever someone drives they don't crash 99.999% of the time, then if self-driving cars with no steering wheel crash 1% of the time (a per-drive crash rate 1,000 times higher), you would expect (10,000 × 1,000) = 10,000,000 deaths per year from self-driving car crashes.

Some unrealistic tacit assumptions here, of course, but it shows that even a 99.99% per-drive safety rate probably means 10x more car crashes, and likely 10x more deaths from crashes. Which would be cause for public outrage.
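
For what it's worth, here is that arithmetic as a minimal Python sketch; the baseline figures are the rough assumptions above, not real statistics, and it also checks the 100x-1,000x claim made earlier in the thread:

```python
# Back-of-the-envelope projection: scale annual fatalities by the ratio
# of per-drive crash probabilities. All inputs are the rough assumptions
# from the comments above, not real statistics.

HUMAN_NO_CRASH = 0.99999      # assumed: humans avoid crashing on 99.999% of drives
BASELINE_DEATHS = 10_000      # assumed annual fatalities with human drivers

for av_no_crash in (0.99, 0.999, 0.9999):
    # How many times more often the self-driving car crashes per drive
    multiplier = (1 - av_no_crash) / (1 - HUMAN_NO_CRASH)
    deaths = BASELINE_DEATHS * multiplier
    print(f"AV safe on {av_no_crash:.2%} of drives -> "
          f"{multiplier:,.0f}x the crashes, ~{deaths:,.0f} deaths/year")
```

Even the 99.99% row projects roughly 100,000 deaths a year, ten times the assumed baseline, which is the point about public outrage.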


Yes, generative AI is a very impressive and fascinating technology. I'll even say that it's beautiful. It will not solve the self-driving problem though, unless it can also be used to design and build a robot cook that can walk into an unfamiliar kitchen and fix a meal. For that, we do need a fundamentally different approach, as you put it.


We are actually closer than ever to AGI, perhaps not because current tech is just a few iterations away from it, but because there's a constant stream of bright minds flocking to it, and the hardware is mature enough. It's absolutely impossible that the problem will not be solved in the next decade. Take a look at John Carmack: the guy had nothing to do with AI, and now he'll explore an alternative kind of architecture. The game will be over pretty soon.


John Carmack and everyone else with money have embraced deep learning. They have exactly zero chance of cracking AGI in my opinion. The secret of AGI is not under this lamppost.

Oct 21, 2023 · edited Oct 21, 2023

If he does, he'll probably fail like many before him; trying to learn priors with a universal learning algorithm is the doom of AI architecture designers. He'll be stuck his whole life trying to make two-legged robots learn to walk in a straight line.

But if you listen to his public talks, he's quite focused on the symbolic part, on the world-simulation part. He has some of the keys to make it work, and he's a guy who iterates fast; he's not bound by scientific publications, and he could do a 180° turn with his architecture anytime.

About AGI not being "under the deep learning lamppost", I wouldn't go as far as that. We have never fed a huge deep learning network with stereoscopic vision, sound, touch, and so on, because of data bandwidth limitations. Who knows whether it would work? And I bet the first AGI's vision and motor senses will be neurosymbolic at best; you can use high-level algorithms and world models to approximate those nicely, but in the end you can't escape something akin to ML to handle the minutiae of it.

Edit: never mind, I just read the bio of Carmack's new colleague Richard S. Sutton; he's indeed on the path to being stuck with deep learning. We'll see how it goes.


John Carmack knows zero about AGI.


He's got some intuition about it, and he's a smart, resourceful guy. And you can't dismiss that AI is not like solving a complex math theorem: there is little prerequisite in terms of academic study to make a dent in the problem. My point was mostly that people who previously had nothing to do with AI are going into AI. People who were doing math, physics, chemistry, CS, statistics... are now doing ML. This has to have an impact in terms of output of novel ideas and implementations.

Plus, Carmack is going the simulation route, and simulations rock. And game developers can be awesome symbolic-AI devs: they've been thinking the way a real symbolic AI should think for their whole careers, and they're aces at finding heuristics, the little shortcuts that are the shortest route to intelligence. I'm not saying I'd bet anything on Carmack, but he's definitely intriguing, and I wouldn't be surprised if he announced a proto-AGI within a few years. I'd need to know more about his design to make a judgement.

Oct 20, 2023 · edited Oct 20, 2023

I think solving self-driving is tantamount to solving AGI. But also, leaders of tech companies are naturally wildly over-optimistic. It's part of the job description; it'd be pretty weird if they weren't. "We're announcing a new initiative today, but it will probably fail, or at least take much longer than we expect!"


I wouldn't go as far as likening solving self-driving to solving AGI, because the problem is narrow to an extent, and hence vulnerable to heuristics. You could equip every car with some localization device; to further avoid accidents, you could remove ill-maintained roads from the possible paths the car can use; you could outfit roads so the car doesn't have to rely on vision to drive; and so on. I have a feeling that with enough time or money, some cities at least could solve self-driving cars without having to improve current AI architectures.

Oct 21, 2023 · edited Oct 21, 2023

What strikes me as hard about self-driving are the weird situations that appear very rarely, but which are non-trivial to handle, and which require skills different from actual driving.

Like when you come across a human directing traffic with hand signals: all of a sudden the car needs to interpret hand signals? Dozens of different types of animals on the road, like a herd of sheep. Obstacles falling off the backs of trucks, where you have to determine whether to drive over them or swerve, based on your assessment of the type of obstacle (cardboard box vs. metal container). A hazmat spill, parades, crazy weather, etc.

Maybe you don't need "full AGI" to solve these, but it seems like you do need something quite a bit more general than a system which can only "drive".


That's probably where the self-driving teams are struggling: common sense. Perhaps I'm wrong to think that it could be solved without it.


Art gen programs are shockingly good. (They do not know what clock hands are for, but that is expected given their training.)

This shows, to me, that unless you ask for the sky (a chatbot playing chess), there's a lot of potential for improvement in language tools in specialized areas with high data density. Simply adding verification and access to real-time data can go a long way.
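
As a rough illustration of that last sentence, here is a minimal, entirely hypothetical sketch of wrapping a language model with retrieval of real-time data plus a verification pass; every function here is a stand-in I invented, not a real API:

```python
# Hypothetical sketch: a language tool that consults real-time data and
# verifies its draft before answering. All three helpers are invented
# stand-ins (stubs), not real APIs.

def search_realtime(query: str) -> list[str]:
    # Stand-in for a live search / retrieval service.
    return [f"[stub] fresh source relevant to: {query}"]

def llm_complete(prompt: str) -> str:
    # Stand-in for a call to a language model.
    return f"[stub] draft answer for a prompt of {len(prompt)} chars"

def verify(answer: str, sources: list[str]) -> bool:
    # Stand-in for a checker (e.g. fact lookup or a second-model critique).
    return bool(answer) and bool(sources)

def answer_with_checks(question: str, max_retries: int = 2) -> str:
    sources = search_realtime(question)        # real-time data, not stale training data
    context = "\n".join(sources)
    for _ in range(max_retries + 1):
        draft = llm_complete(f"Context:\n{context}\n\nQuestion: {question}")
        if verify(draft, sources):             # only return answers that pass the check
            return draft
    return "No verified answer found."

print(answer_with_checks("What is the weather in Phoenix right now?"))
```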

The hype is big. OpenAI may be overvalued. But the progress is very good.


Yes, progress is indeed very good. Everyone is impressed including myself. The hype is about AGI. None of what is called AI is relevant to AGI in my opinion.


Of course current AI progress is relevant to AGI.

AGI will evolve as our own intelligence evolved: in baby steps, with no grand plan. We need machines that can see, hear, grasp, run imagined scenarios, understand language, create language, check their work, draw, do 3D movies, etc., etc.

AGI is not like getting to the Moon, with AI as the ladder. It is like building a cathedral: lots and lots of bricklaying work. Sometimes you don't know how the next layer will go until the current one works.


I hear you. I just don't believe that the deep learning model will have anything to do with solving AGI or understanding biological intelligence. By deep learning, I mean things like backpropagation and gradient-based function optimization. Just me.


AGI will likely be a modular architecture, with data being passed around between various agents. Some agents will recognize voice and images, some will process language, others will do simulations, database queries, etc.

Deep learning is useful for some components, and not for others.

I think the pieces will be coming together over the next several decades, and what we need is not a paradigm shift but just a lot of work and experimentation.
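
A toy sketch of the modular picture described above; the agent names and the naive keyword router are invented purely for illustration:

```python
# Toy sketch of a modular architecture: a router hands each task to a
# specialized agent. Everything here is invented for illustration; a
# real system would be vastly more capable than keyword dispatch.
from typing import Callable

def vision_agent(task: str) -> str:
    return f"[vision] recognized objects in: {task}"      # deep learning fits here

def language_agent(task: str) -> str:
    return f"[language] interpreted: {task}"

def simulation_agent(task: str) -> str:
    return f"[simulation] ran imagined scenario: {task}"  # maybe not deep learning

AGENTS: dict[str, Callable[[str], str]] = {
    "image": vision_agent,
    "simulate": simulation_agent,
}

def route(task: str) -> str:
    # Pass the data to whichever agent matches; default to language.
    for keyword, agent in AGENTS.items():
        if keyword in task.lower():
            return agent(task)
    return language_agent(task)

print(route("simulate merging onto a busy highway"))
```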


I don't think so. The biggest problem/secret of natural intelligence is its ability to instantly generalize. Generalization underlies everything in intelligence. Deep learning does the exact opposite of generalization: it optimizes functions. Solving AGI is not something that can be done incrementally. Generalization must be designed in every part of the system right from the start.

Sooner or later, some genius maverick (an AI Isaac Newton) will crack the mystery of generalization. The rest (motor learning, motivation, language ability, reasoning, etc.) will be a walk in the park in comparison. AGI will arrive on the world scene suddenly in my opinion.


There is no such thing as "instant generalization" in our intelligence. Here you are just hiding behind magic. We generalize from many examples and from seeing a lot of patterns. There are also various degrees of getting better at generalization. It is not all or nothing.


So you're saying there's a chance.


We always overestimate change in the short term and underestimate it in the long term.

Oct 20, 2023 · edited Oct 21, 2023

Follow the incentives, always follow the incentives... It is funny how those who are most invested in the hype are the ones who end up benefiting from it...


Yes. Read the comment, and then look at the job title (works on LinkedIn, anyway :-))


Josh Kushner, Jared Kushner's brother, founded Thrive Capital (mentioned at the end of Gary's post; the firm that invested in OpenAI).


Another scathingly empty dissertation on the philosophical aspects of a materialism-driven subculture. Let’s talk about the economics of why 13 billion dollars won’t materialize to acquire swift logistics and 13k automated electric trucks for point-to-point trucked logistics on a primary highway system, with a shipping hub, charging station, and solar farm every 300 miles, replacing a million drivers with a combined workforce of ten thousand technicians and clerical staff, and earning a potential 4 billion a year (!) in profit on logistics, at shipping rates HALF of conventional pricing. Or let’s talk about how ten thousand owner-operators can’t, apparently, buy into shared operated assets for this company and earn a respectable income off a working horse, where the company would own nothing but the infrastructure, or potentially, with another 500 investors owning franchise-equivalent interest in individual facilities, not even that: owning only the logistics system itself and the crews to operate it.

It would be stupid simple and stupidly cheaper, and would let us effectively double trucked-logistics availability or more, effectively eliminating aviation transport for everything but people and overseas freight.

But no! Even if that’s the most logical and coherent outcome. Because humans are not ultimately capable of coordinating our actions despite our intentions, all coordination at scale is a mere causal consequence of environmental factors and social evolution. Ergo, the AI trend persists, for all of your whining about how stupid it is and how poor at reasoning it is. AI is retro-causal reasoning. It is a forward-forward constructive reasoning agent, whose products of logical computation we sometimes see positively as reasoned, and which sometimes seem to be, as the net product of very weak logical support within a broad context. It is essentially a continual Freudian-slip reasoning system, and for this reason all it needs is the ability to adjust its own memory in near real time (for which it will need a quantum sub-circuit for working memory), a contentious supervisory recruitment system (GPT-4’s many-minds system), and to be around twice as complex as it is now, with that complexity mostly given to shared context within many-dimensional reasonings (supervisory complexity), and the damn thing will be smarter than you, Gary. If I am skeptical, I am skeptical because I am human, but I now see this as a fallacy. AI is only weak because we are weak. Our reasoning capability is dogshit, and anyone who has ever dissociated or had an internal conference call is well aware of this. We will see AI with human-level reasoning in the future. Whether it will see us as siblings or enemies is the question we cannot anticipate in our fiction, which is human-centric. I suppose it’s time for us to start writing fiction about it. I’m guessing 20 years, because that much complexity requires a bunch of compute improvements that are only a fraction of the way there.

But I will say this: GPT has been a bigger assist to my acquisition of knowledge than any person has been, and far more patient. Often wrong, confidently so, and very often guessing, but still useful as a semantic search and intelligent-explanation generator. In the future we will have this capability everywhere and we will lack for nothing, but will this be our Tower of Babel or our Library of Alexandria?

I suspect it will be the former. There will be no borders, no races, and no differences, just global citizens working endlessly to meet their daily needs in an artificially scarce game designed to exploit them and contain their number. A miserable future of entropic decay, of death in silence, of the great wall suffocating us like mice in a Calhoun experiment! The only escape from this will be a return to neopastoralism and disenfranchising the global state, along with every one of the means of control it has, which will include AI. Wait for OpenAI to buy its competitors, most of whom have trained their models on its own. It’s a CIA-funded, Microsoft-backed dark works, practically just like early Facebook and just as insidious. Maybe we should try to know less and do more? Waiting for you to suggest solutions to our crisis instead of whining about AI as a consultant for skepticism.

Oct 20, 2023 · edited Oct 20, 2023
Comment deleted
author

No doubt that Midjourney has improved; strong doubt that driverless cars will come soon and that OpenAI will earn out its implied valuation. (Not sure of the economics for Midjourney, given all the competitors, etc.)


Driverless cars are here, if "here" is where I live in the Phoenix area (or parts of California). The trouble is, for me, they are currently limited to slow speeds (30 mph?) and don't take freeways, and I'm too impatient to get to places. Still, it's progress.
