44 Comments

Me: Alexa, tell me about information science

Alexa: here’s some misinformation and pseudoscience.


Intelligence (which I equate with problem-solving) has 3 scalable dimensions: (1) "inventiveness" (basically, how good the underlying problem-solving mechanism ("inventor") is at solving any problem X given information Y), (2) the knowledge/information Y that guides the inventor towards solutions, and (3) the physical resources (including time, energy, compute, etc.) that are/may be consumed by/during the problem-solving process. Given that they are engaged in a race to AGI, the major AI labs always go for the low-hanging fruit, such as (a) scraping data from the internet, (b) synthesising new data, (c) throwing $$$ at compute, and now (d) extending compute time, i.e. (2), (2), (3), and (3). Anything, of course, but (1), i.e. the *actual* problem. Because (1) is hard!


LLMs have undoubtedly trained on a gargantuan number of books.

Surely they must have come across George Polya’s “How to Solve It” in their scraping of the World Wide Web.

But have any of the LLM developers ever actually read the book?

The reason I ask is because Polya begins with this:

“First, you have to UNDERSTAND the problem.”

Do any of the LLM developers actually understand general intelligence? (The problem they at least CLAIM to be solving.)

Does ANYONE actually understand general intelligence?


Is it possible to achieve a general purpose problem solver if one does not follow the basic procedure for solving problems in one’s attempts to solve the problem of the general purpose problem solver?

Will random attempts be able to achieve it?


The AI Brigade already came up with the General Problem Solver back in the 50s. The problem was the General Problem Solver could not solve problems generally.

https://en.wikipedia.org/wiki/General_Problem_Solver


Ha ha ha!


Gary, have you seen this study?

Microsoft’s AI digital assistant, Copilot, has raised concerns among Microsoft's employees and executives about its ability to deliver on its ambitions of being the product that will “fundamentally transform our relationship with technology.” According to an October 2024 survey by Gartner, just four out of 123 IT leaders said Copilot provided significant value to their companies.


link? fits in my next essay


Why do so many people here post comments about articles, studies, whatever, without links?

It's weird.


I just came across a video of a talk by Thomas Friedman in which he did a very hard sell of the idea that we will have AGI in the next few years and that it will “transform everything”. I really don't understand where this is coming from. It seems to be a kind of naive (or perhaps sinister) stand-in for the likely outcome: that (possibly very) useful AI will make many people more productive, and that “everything” will be “transformed” only because those with power will demand that everyone do more, faster, and better, or be replaced. In other words, it sounds to me like it's a way of framing what is in fact a likely power grab by capital and elites as a kind of technological inevitability that we will just have to come to terms with, rather than something that is fundamentally political and social, something that we (ought to) have the power to resist or shape to our benefit.


“I really don't understand where this is coming from”

I can’t speak for anyone else, but I don’t wish to know where what Friedman writes is coming from.


Intriguing spotlight. Ultimately, can test-time compute (time * multiple output evaluations) deliver sustainable improvements?

Intriguing indeed.
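
To make "time * multiple output evaluations" concrete: one common form of test-time compute is best-of-N sampling, where the model drafts several candidate answers and a scorer keeps the one it rates highest. The sketch below is only an illustration of that idea; `generate` and `score` are hypothetical placeholders, not any lab's actual API.

```python
import random

def generate(prompt: str, seed: int) -> str:
    """Placeholder generator: one candidate answer per random seed."""
    random.seed(seed)
    return f"candidate #{random.randint(0, 999)} for: {prompt}"

def score(answer: str) -> float:
    """Placeholder verifier/reward model: higher is assumed to be better."""
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    # Spending more inference-time compute here means drawing n candidates
    # and keeping the one the verifier likes best.
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

print(best_of_n("What limits LLM scaling?", n=8))
```

Whether ranking ever more samples keeps paying off, or flattens out the way pre-training scaling seems to have, is exactly the open question.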


Precisely…

“…whereupon I spent the rest of the evening smearing water around hundreds of plates.” 🍽️

A beautiful and memorable analogy for where LLMs are at the moment. We are left with saturated aprons at best.

This is what it means to “think” and “consider” - putativus - without the application of human consciousness, character and clarity.

The wonderment may be winding down, as others just try to move stuff around and AI-wash everything in its wake.


All the scaling has failed because general intelligence is just that: general. Factual statements by LLMs fail the human test: they must get the facts right. The missing component is explicit knowledge. Only semantic AI models (SAMs) have the ability to certify results. Here is a way that a SAM could improve training while markedly cutting costs.

http://aicyc.org/2024/10/05/llm-training-cost-reduction-using-semantic-ai-model-sam-knowledge-graph/


The efforts to develop artificial intelligence without regard for truth are very reminiscent of a recently espoused philosophy of education in the US that holds that one can teach “critical thinking” without teaching knowledge of the world.

Perhaps the former is a result of the latter.


And when I say recent, I’m talking about recent decades


Scaling is religion, and test-time training is the second coming of Jesus Christ, here to save LLMs from original spin.


The thought of a young Steve Pinker “drying” dishes like Sisyphus has me laughing


Many of the teams pursuing AGI continue to equate linguistic competence with intelligence (the Turing paradigm of intelligence). More charitably, perhaps they hope that intelligence will 'emerge' beyond linguistic competence as they scale further.

More interestingly, researchers such as Demis Hassabis seem to think that AGI will emerge if LLMs are "grounded" through broader foundation models incorporating, for example, visual images. While this looks like a more useful approach in the long term, human language itself isn't grounded (which is why LLMs work in the first place), so I suspect intelligence is going to need a lot more engineering than just mushing together some different representations and keeping your fingers crossed.


Or mushing together some investors and keeping your fingers crossed (perhaps for a different reason)


I was in the library of a friend, an anthropologist, a few years back doing some research. It was fantastic: huge bookcases on wheels in a huge old warehouse in SF.

We had a conversation about what she had in her possession - a massive history of gay and lesbian life, literature and art. What was startling to me (and amusing to her) was how little of it was available “online” in any form. Archivists and researchers know this. AI scientists, not so much, when it comes to the reality of archives of language.

I also buy rare books from a massive institution which receives books and periodicals and sells them to research institutions - Bolerium Books in SF. I can safely say that the vast majority of what they have in their storage is not “online” in any form. That is why it is a rare commodity which institutions protect.

When we say that systems such as GPT-4 have been trained “on the corpus of human knowledge”, that's manifestly false. It's trained neither on all the content that, to pick a random company, Springer-Verlag has in science and medicine, nor on the proceedings of the IEEE over its history in engineering. Its knowledge base can barely handle, to pick random topics, vapor deposition technology or the fine-grained detail of endocrinology.

Not even close.

It has very little of centuries of English-language archives in an enormous variety of subjects. What it does have is “internet”, which has moved it over the finish line for amazing language capability in some highly contemporary areas - Python programming, for instance, which is 30+ years old.

I think the architecture can be scaled much more impressively over time, but it will increasingly lack what I think of as real-world physics. The neural mechanisms we humans use to process math and vision, for example, overlap. Until these systems are more embodied (no, I don't mean multimodal), the scaling/performance issue seems to be just straightforward exhaustion of training data, which cannot be merely synthetic.

I don't see AGI anytime soon, because the architecture for that isn't well represented by current architectures.

Intel is in crisis because they failed to forecast extremely low-power arrays of processor architectures, lost 50% or more of the compute market to cellphones, and are now losing another huge swath of the market to NVIDIA architectures.

NVIDIA will erupt into crisis when a new ultra-low-power architecture emerges closer to the wattage human brains use; brains spend very little energy on “Default Mode Networks”, obvious modeling of reality, and autoregressive conditioning.

I'm not sure we are yet capable of modeling that kind of architecture - we are too good at remixing what we know.


The librarian at that library had better check all electronics at the door because OpenAI, DeepMind and the others are always on the hunt for more data and they absolutely HATE paying for it.

They would steal their own grandmother’s pie recipes if they had the chance.


The next couple of levels are the Crown Jewels of many collections, and devilishly difficult to access, yet are part of common language. It's even hard to find indices on materials in research.

When 95% of WorldCat has been absorbed, then we might be able to say we have reached a limit, but the academic press - arguably the bulk of human research knowledge - is blithely behind paywalls and paper walls.

I’m sure a billion here and a billion there are imminent for use.


I guess we will have to find new uses for Altmanium.


It might be a good additive to “Dr. Frankenstein’s Magic Elixir of Life”

Although it would probably make the patients prone to hallucination.

But that’s a downside we just have to live with, right?


Lysergic acid DieLLMide


Maybe it's something like Adamantium? I'd like that, as long as we are just making shit up. I always wanted to be more like Hugh Jackman. He gets all the girls.


Look at them yo-yos! That ain't workin'. They play with training on their LLM. That ain't workin'. That's the way you do it: money for nothing and your data for free.

With sincere apologies to Dire Straits


When in dire straits, simply scale it up.


The idea is to get the equivalent of AlphaZero for LLMs, via reinforcement learning and self-play, by incorporating a reward and penalty function. Now if the result is anything like AlphaZero, then well...
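
For readers who don't know the reference: AlphaZero improves by generating its own games and learning from a win/loss signal. A toy sketch of that loop transplanted to a verifiable text task might look like the following; the dict-based "policy", the arithmetic problems, and every function name here are made-up placeholders, not anyone's published training recipe.

```python
import random

def sample_answer(policy, problem):
    """Reuse a remembered answer if one exists, otherwise explore randomly."""
    return policy.get(problem, str(random.randint(0, 30)))

def reward(problem, answer):
    """Verifier with reward and penalty: +1 if the answer checks out, -1 otherwise."""
    return 1.0 if answer == str(eval(problem)) else -1.0  # toy eval on trusted strings only

def update_policy(policy, problem, answer, r):
    """'Learning' reduced to its crudest form: keep positively rewarded answers."""
    if r > 0:
        policy[problem] = answer

def self_play(problems, rounds=200):
    policy = {}
    for _ in range(rounds):
        for problem in problems:
            answer = sample_answer(policy, problem)
            update_policy(policy, problem, answer, reward(problem, answer))
    return policy

print(self_play(["2+2", "3*7"]))  # converges to {'2+2': '4', '3*7': '21'} once found
```

The catch, of course, is that Go and chess come with a clean win/loss signal; most of language does not.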


It's an interesting comparison. AlphaZero is able to learn how to play games like Go and chess better than humans, without possessing any identifiable intelligence as such. Perhaps we need to take a leaf out of Rodney Brooks's playbook and go back to embodiment approaches. Personally, I feel that Boston Dynamics is a screaming bargain relative to the valuations placed against all these LLM shops.


Amusing to note that a Google search with the terms:

scaling laws -language

retrieves an actual definition of Scaling Laws:

"Scaling laws are relations between physical quantities in which all the physical quantities appear in terms of powers, whereby a power of x is expressed in the form xα where α is a real number."

as the top response. The second response is a link to:

https://garymarcus.substack.com/p/a-new-ai-scaling-law-shell-game
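
For anyone who wants that definition made operational: a power law y = c * x^α is a straight line in log-log space, so α can be estimated with an ordinary linear fit. The numbers below are invented purely for illustration and are not measurements from any real model family.

```python
import math

compute = [1e18, 1e19, 1e20, 1e21]   # hypothetical training FLOPs
loss = [3.2, 2.6, 2.1, 1.7]          # hypothetical evaluation losses

# Fit log(loss) = log(c) + alpha * log(compute) by least squares.
xs = [math.log(x) for x in compute]
ys = [math.log(y) for y in loss]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
alpha = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
c = math.exp(mean_y - alpha * mean_x)

print(f"loss ~ {c:.1f} * compute^{alpha:.3f}")  # alpha < 0: loss falls as a power of compute
```

Fitting such a curve is the easy part; the debate in the post is about whether the returns from that exponent are drying up.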


"all the physical quantities appear in terms of powers, whereby a power of x is expressed in the form x^α, where α is a real number."

Except in the case of LLMs, x is not a physical quantity, and the power α to which x is raised is an imaginary number.

Other than that, it’s exactly the same


Excellent


Excellent
