49 Comments

Hilarious. Half of AI is science. The other half is a scam.

Oct 21, 2023 · Liked by Gary Marcus

Channeling Richard Feynman... :)

"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled."

Oct 22, 2023 · Liked by Gary Marcus

I mainly feel bad for the drivers who took this stuff to heart and needlessly lost sleep and had their bodies barraged with stress hormones over it. For years I've been telling whoever wants to listen: 'This is basically the management class trying to spook you all and keep you from getting too uppity.' I don't personally expect full self-driving in my lifetime. I expect lots of nice advances in the driving experience, sure. But not that.


Love this

Oct 21, 2023 · edited Oct 21, 2023 · Liked by Gary Marcus

Self-driving cars always struck me as something that, to be viable and publicly accepted, requires a ridiculous level of accuracy. They have to be at least as good as people (who don't crash on around 99.999%+ of their drives), and they exist in a world where the landscape differs from place to place and is constantly changing. There's an illusion of being close to L5 self-driving once you reach a 99% or 99.9% chance of not crashing on a drive, but in reality that's miles away: at those levels you'd have 100x-1000x+ more people crashing their cars.
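
As a back-of-the-envelope version of that arithmetic (all rates here are the comment's illustrative figures, not real crash statistics):

```python
# Compare per-drive crash rates: an AV at "impressive-sounding" reliability
# versus humans. Rates are illustrative assumptions, not real statistics.
human_crash_free = 0.99999            # assume ~99.999% of human drives end crash-free

for av_crash_free in (0.99, 0.999):   # the levels that create the "close to L5" illusion
    ratio = (1 - av_crash_free) / (1 - human_crash_free)
    print(f"AV crash-free rate {av_crash_free:.3%} -> {ratio:.0f}x the human crash rate")

# Prints 1000x at 99% and 100x at 99.9% -- the comment's "100x-1000x+ more crashes"
```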

I've been in countries where cows were sleeping in the middle of the road and people drove around them. I can't imagine ever getting big samples of data for all of these scenarios. Will models have to be retrained whenever a culture decides to treat animals differently and the animals decide to sleep in the middle of the road?


The hype is real, but to be fair, we're closer than we've ever been, and the technology is very impressive compared to the generations that preceded the AI winters. I'd bet the hype will translate into serious change in the next decade.

Oct 20, 2023 · edited Oct 20, 2023

I think solving self-driving is tantamount to solving AGI. But also, leaders of tech companies are naturally wildly over-optimistic. It's part of the job description. It'd be pretty weird if they weren't: 'We're announcing a new initiative today, but it will probably fail, or at least take much longer than we expect!'


Art gen programs are shockingly good. (They do not know what clock hands are for, but that is expected given their training.)

This shows, to me, that unless you ask for the sky (a chatbot playing chess), there's a lot of potential for improvement in language tools in specialized areas with high data density. Simply adding verification and access to real-time data can go a long way.
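
A minimal sketch of what "verification plus real-time data" could look like in practice. Every callable here (`llm`, `search`) is a hypothetical stand-in, not any real library's API:

```python
# Hypothetical sketch: wrap a language model with a retrieval step and a
# verification pass. `llm` and `search` are assumed stand-ins, not a real API.

def answer_with_verification(question, llm, search, max_tries=3):
    """Draft an answer grounded in fresh sources; keep it only if it checks out."""
    sources = search(question)                  # real-time data: fetch current documents
    for _ in range(max_tries):
        draft = llm(question, context=sources)  # draft an answer from retrieved text
        verdict = llm(f"Do the sources support this answer? Reply yes or no.\n{draft}",
                      context=sources)
        if verdict.strip().lower().startswith("yes"):  # verification pass accepted it
            return draft
    return None                                 # refuse rather than guess unverified
```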

The hype is big. OpenAI may be overvalued. But the progress is very good.


So you're saying there's a chance.


We always overestimate change in the short term and underestimate it in the long term.

Oct 20, 2023 · edited Oct 21, 2023

Follow the incentives, always follow the incentives... It is funny how those who are most invested in the hype are the ones who end up benefiting from it...


Josh Kushner, Jared Kushner's brother, founded Thrive Capital (mentioned at the end of Gary's post), the firm that invested in OpenAI.


Another scathingly empty dissertation on the philosophical aspects of a materialism-driven subculture. Let's talk instead about the economics: why won't 13 billion dollars materialize to acquire Swift logistics and 13k automated electric trucks for point-to-point trucked logistics on the primary highway system, with a shipping hub, charging station, and solar farm every 300 miles? It would replace a million drivers with a combined workforce of ten thousand technicians and clerical staff, and earn a potential 4 billion a year (!) in profit on logistics while shipping at HALF conventional pricing. Or let's talk about why ten thousand owner-operators apparently can't buy into shared operated assets for this company and earn a respectable income off a working horse, where the company would own nothing but the infrastructure; or, with another 500 investors owning franchise-equivalent interest in individual facilities, not even that, owning only the logistics system itself and the crews to operate it.

It would be stupid simple and stupidly cheap, and would let us double trucked-logistics availability or more, effectively eliminating air transport for everything but people and overseas freight.

But no! Even if that's the most logical and coherent outcome. Because humans are not ultimately capable of coordinating our actions despite our intentions, all coordination at scale is a mere causal consequence of environment and social evolution. Ergo, the AI trend persists, for all of your whining about how stupid it is and how poor at reasoning it is. AI is retro-causal reasoning. It is a forward-forward constructive reasoning agent, the products of whose logical computation we see as reasoned, and which sometimes seem to be, as the net product of very weak logical support within a broad context. It is essentially a continual Freudian-slip reasoning system, and for this reason all it needs is the ability to adjust its own memory in near-realtime (for which it will need a quantum sub-circuit for working memory), a contentious supervisory recruitment system (GPT-4's many-minds system), and to be around twice as complex as it is now, with that complexity mostly given to shared context within many-dimensional reasonings (supervisory complexity), and the damn thing will be smarter than you, Gary.

If I am skeptical, I am skeptical because I am human, but I now see this as a fallacy. AI is only weak because we are weak. Our reasoning capability is dogshit, and anyone who has ever dissociated or had an internal conference call is well aware of this. We will see AI with human-level reasoning in the future. Whether it will see us as siblings or enemies is the question we cannot anticipate in our human-centric fiction. I suppose it's time for us to start writing fiction about it. I'm guessing 20 years, because that much complexity requires compute improvements that are only a fraction of the way there.

But I will say this: GPT has been a bigger assist to my acquisition of knowledge than any person has been, and far more patient. Often wrong, confidently so, and very often guessing, but still useful as a semantic search and intelligent-explanation generator. In the future we will have this capability everywhere and we will lack for nothing, but will this be our Tower of Babel or our Library of Alexandria?

I suspect it will be the former. There will be no borders, and no races, and no differences, just global citizens working endlessly to meet their daily needs in an artificially scarce game designed to exploit them and contain their number. A miserable future of entropic decay, of death in silence, of the great wall suffocating us like mice in a Calhoun experiment! The only escape from this will be a return to neopastoralism and disenfranchising the global state along with every means of control it has, which will include AI. Wait for OpenAI to buy its competitors, most of whom have trained their models on its outputs. It's a CIA-funded, Microsoft-backed dark-works operation, practically just like early Facebook and just as insidious. Maybe we should try to know less and do more? Waiting for you to suggest solutions to our crisis instead of whining about AI as a consultant for skepticism.


How Long DOES ..
