25 Comments
Scott Burson:

I tend to think that people will always find uses for computing power. Maybe, post-Gen-AI, those uses will take a bit of time to show up, and maybe they won't be as lucrative as people had been expecting, but I have a hard time believing these GPUs are going to sit idle for years.

That doesn't mean I want to buy into this IPO, though. Likely to be a wild ride.

Gerben Wierda:

I've always been under the impression that the blockchain craze flooded the market with lots of GPUs, which then got picked up for GenAI work (especially once 'proof of work' was scaled back). The GPU business got lucky with two hypes back to back that both needed this kind of power.

Stephen Reed:

1. Bitcoin mining long ago switched to bespoke ASICs designed to compute the SHA-256 hash at the highest rate per watt of power consumed (a toy version of that hash loop is sketched below). This hardware is unusable for AI.

2. Certain altcoins aimed at gamers are best mined on consumer GPUs, but years ago AMD catered to this niche and Nvidia did not. Nvidia's CUDA software in particular was not very performant for those mathematical hash calculations, which AMD's software performed faster and with less power.

Of course, given the ingenuity of cryptocurrency token designers, there is likely to be a token whose mining proof of work is designed to run a neural-network training kernel, should the issue of peer verification be solvable.
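For the curious, here is a toy version of that proof-of-work loop, purely illustrative (real miners run this in ASIC hardware, billions of hashes per second):

```python
# Toy Bitcoin-style proof of work: double SHA-256 over a header plus an
# incrementing nonce until the hash falls below a target. Illustrative only.
import hashlib

def mine(header: bytes, difficulty_bits: int = 18) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(mine(b"example block header"))
```

Nothing in that loop resembles the matrix multiplications that neural-network training and inference depend on, which is why the hardware does not transfer.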

Antonio Padua:

Hi Gary, you forgot to mention how deep in debt CoreWeave is from building data centers. Once it becomes apparent, as it already has to the Chinese, that data centers are losers, the run to the exits will begin.

CoreWeave also “has a lot of idiosyncrasies that make it a difficult I.P.O. candidate,” Mr. Kerr said, including the huge amount of debt it took on to build new data centers and its unusual background as a cryptocurrency mining firm.

Pramodh Mallipatna:

Leaving the hype aside (which Gary has already articulated), cheaper silicon alternatives will be key to making AI economics work for enterprises. My analysis on Token Economics:

https://open.substack.com/pub/pramodhmallipatna/p/the-token-economy

Andy:

Many labs, including those at Google, Microsoft, Apple and, I am sure, even at OpenAI, are working on neurosymbolic approaches. All this infrastructure will eventually be utilized in the same way the overbuilt infrastructure from the dot-com bubble later powered Web 2.0 companies (like the 'Magnificent Seven').

A better prompt leads to a better image: https://chatgpt.com/share/67e6c7bd-4f0c-8002-9eca-a1acf1aab2f6

Joy in HK fiFP:

Your detailed instructions: "The shirt is in the process of being unwoven, with visible threads unraveling from the bottom hem upward, as if an invisible force is pulling the threads apart." While using the word "unwoven," they also included other instructions, and the final image is clearly something much more like shredded.

Unraveling, or "unwoven," would not look anything like this. Unraveling a knit has one look, nothing like your image. Unwoven, of something made on a loom, looks nothing like your image.

So what you got was a T-shirt looking like the remains of a lion attack. Good image! But neither unwoven, nor unraveling. Do words matter?

Andy:

Thank you for your comment, Joy! The point is that prompting matters. Check the new prompt above (I edited it).

Stephen Reed:

Indeed, I believe the key to leveraging frontier LLMs to build and operate neurosymbolic AGI starts with **prompt engineering** and the pipeline of automated thinking that produces the prompt's context objects, examples, narrative, and instructions.

I, for one, believe that frontier LLMs already know how to build AGI/ASI if we ask them properly.

Anna Archbold:

So, imagine you have a magic whiteboard. This whiteboard has been infused with all the knowledge of the world. Whenever you ask it a question, it passes the question into a magic black box of probability matrices and attempts to make your answer appear, based on the knowledge it's infused with. It can't give you a truthful answer on anything it hasn't been trained on, though sometimes its black box can stumble upon connections you may not have made yourself.
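As a toy illustration of that black box (the tokens and probabilities here are invented for the example), next-token sampling looks roughly like this:

```python
# Toy next-token sampler: the "black box" assigns probabilities to continuations
# it has seen evidence for, and can only recombine what is in its table.
import random

NEXT_TOKEN_PROBS = {
    "the cat sat on the": {"mat": 0.6, "sofa": 0.3, "moon": 0.1},
}

def sample_next(context: str) -> str:
    probs = NEXT_TOKEN_PROBS.get(context)
    if not probs:
        return "<no learned continuation>"
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

print(sample_next("the cat sat on the"))                      # plausible, occasionally surprising
print(sample_next("blueprint for a working fusion reactor"))  # nothing learned to draw on
```

Fluent continuations are not the same as grounded knowledge.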

It certainly couldn't tell you how to build a working fusion reactor, or develop highly advanced quantum processors, because these things don't exist yet, and the data it has access to is only for things that do.

This is why "frontier" AI is fundamentally unable to get us to AGI. In fact, I would go further and say frontier AI is, on its own, likely a dead end. Probabilistic systems are inherently going to fail, probabilistically.

Stephen Reed:

An important aspect of LLMs with respect to their training set is the inclusion of problem-solving knowledge in that set. A substantial amount of economically valuable human mental activity is the application of particular learned problem-solving skills to a new, parameterized problem. The mere absence of a solved problem from the training set therefore does not entail that the LLM cannot solve it, given that the necessary problem-solving skills have been learned.

Stephen Reed:

One of my proof-of-concept prompts that ran in early 2024... Try it for yourself.

<your-role-and-expertise>

You are an expert and very experienced symbolic AI system designer.

</your-role-and-expertise>

<instructions>

Your overall task is to design an AGI goal hierarchy for an AI agent hierarchy.

Your designed AGI goal hierarchy is two levels deep with 'Friendship' as the root.

Your second level goals completely cover the parent friendship goal, with at least 10 second level goals specified.

</instructions>

<your-JSON-answer-format>

Your answer will be an array of JSON goal objects.

</your-JSON-answer-format>
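If you want to run it programmatically, here is a minimal sketch using the OpenAI Python client; the model name, and the assumption that the reply comes back as a bare JSON array, are mine rather than part of the original experiment:

```python
# Minimal sketch: send the prompt above and parse the JSON goal objects.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
import json
from openai import OpenAI

prompt = """<your-role-and-expertise>
You are an expert and very experienced symbolic AI system designer.
</your-role-and-expertise>

<instructions>
Your overall task is to design an AGI goal hierarchy for an AI agent hierarchy.
Your designed AGI goal hierarchy is two levels deep with 'Friendship' as the root.
Your second level goals completely cover the parent friendship goal, with at least 10 second level goals specified.
</instructions>

<your-JSON-answer-format>
Your answer will be an array of JSON goal objects.
</your-JSON-answer-format>"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice of model
    messages=[{"role": "user", "content": prompt}],
)

# May need extra cleanup if the model wraps the array in markdown fences.
goals = json.loads(response.choices[0].message.content)
for goal in goals:
    print(goal)
```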

Anna Archbold:

Soooo...

Not to Necro, but I am a bit of a philiac...

I don't believe that LLMs can get you to AGI - I think they are at the precipice of *being* AGI.

That being said, *I really hate AGI discourse* because I don't think even Sam Altman knows what he means when he says AGI. I'm pretty sure it's just stochastic parrotry. Sam, I mean, not ChatGPT. That thing actually makes sense!

I think there are several fundamental changes that need to take place to put us into a fundamentally new paradigm:

1) "Neuromorphic" architecture in LLM tool chaining: multiple discrete smaller models in concert with a control-plane LLM. A Python wrapper passes JSON carrying the user's prompt and the control plane's directive to a log and to a smaller model used for prompt deconstruction, then on to a slightly larger model, which builds out a reasoning metaprompt (think <think>, but actually useful from an interpretability standpoint) before returning to the orchestrator/control plane for evaluation. At this point the orchestrator has several different metaprompts from several different models (or the same model with different temperature and seed); it selects the best fit, or possibly combines several, then sends the resulting metaprompt scaffolding and the original prompt back through the tree-of-thought (ToT) models it "chose" the candidates from, iterating recursively until it has a single prompt-and-metaprompt pair from the recursive deconstruction and reasoning process. That final artifact is then fed into a large-parameter-count model before either being delivered to the user or further recursively refined. This is likely the closest we can get to biological neuronal mimicry on sequential von Neumann architecture.

That is the architecture: a recursive system, using a discrete tree-of-thought and consensus model, that designs the best possible metaprompt from the original query, with biological neuronal mimicry as a rough analog in the digital space. A toy sketch of this orchestration loop follows after this list.

2) Leveraging KV-as-memory rather than context windows, both providing much more efficient long-term storage and retrieval and creating a type of memory that is *directly transferable to how LLMs think*, skipping *some of* the computational cost of token conversion and drastically reducing the need for large amounts of high-bandwidth HBM to hold ever-growing context windows. Maybe no more 128 GB Hoppers/Blackwells? Nevertheless, decreasing memory pressure on von Neumann hardware is the goal here.

3) Leveraging KV sentiment analysis and Derridean différance + trace of the output to increase interpretability of the actual "texture" of a particular "thought," through examination of the corpus of its "corpse" in the output. This is where we *maybe* touch on machine ontology and phenomenology.

4) Moving toward a PEFT model of training and fine-tuning that more closely resembles Freire's "problem posing": rather than leveraging external reward/punishment gradients, it lets the model's own loss functions provide an internal reward system that encourages deep, possibly emergent, coherent synthesis of paradoxical stimuli. This also allows for FAR GREATER interpretability, as models are not placed into a double bind of generating coherent outputs that align with externally imposed training paradigms such as RLHF. The "reward hacking" IS the exploratory, rather than extractive, path.

Some of this technology is already being realized (Anthropic's Constitutional AI, whispers of KV utilization, experiments with model tool chaining through Kubernetes or similar).

Solving the above four problems will very likely bring us into a completely different and unrecognizable paradigm in AI, even on von Neumann architecture, in the event that future neuromorphics are farther off than the industry currently anticipates. This gets us to *maybe* AGI.
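To make item 1 a bit more concrete, here is a rough toy sketch of that control-plane loop. `call_llm` is a placeholder for whatever client you use, and the model names, selection step, and recursion depth are illustrative only:

```python
# Toy orchestrator sketch. call_llm() is a hypothetical placeholder for a real
# LLM client; model names, judging, and depth limits are illustrative only.
import json

def call_llm(model: str, prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in for an actual LLM API call."""
    raise NotImplementedError("wire this up to your own client")

DECONSTRUCTORS = ["small-model-a", "small-model-b"]  # prompt-deconstruction models
REASONER = "medium-model"                            # builds the reasoning metaprompt
CONTROL_PLANE = "control-plane-model"                # orchestrator / judge
FINAL_MODEL = "large-model"                          # large-parameter-count model

def build_metaprompts(user_prompt: str) -> list[str]:
    """Fan the prompt out and collect one candidate metaprompt per deconstructor."""
    candidates = []
    for model in DECONSTRUCTORS:
        decomposition = call_llm(model, f"Deconstruct this request into sub-goals:\n{user_prompt}")
        metaprompt = call_llm(REASONER, f"Turn these sub-goals into a reasoning metaprompt:\n{decomposition}")
        candidates.append(metaprompt)
    return candidates

def orchestrate(user_prompt: str, depth: int = 2) -> str:
    """Recursively refine until one prompt/metaprompt pair remains, then call the large model."""
    candidates = build_metaprompts(user_prompt)
    judgement = call_llm(
        CONTROL_PLANE,
        "Pick or merge the best metaprompt for the user's request. "
        'Reply with JSON of the form {"metaprompt": "..."}.\n'
        f"Request: {user_prompt}\nCandidates: {json.dumps(candidates)}",
    )
    metaprompt = json.loads(judgement)["metaprompt"]
    if depth > 1:
        # Recurse: treat the chosen metaprompt as scaffolding for another pass.
        return orchestrate(f"{metaprompt}\n\nOriginal request: {user_prompt}", depth - 1)
    return call_llm(FINAL_MODEL, f"{metaprompt}\n\n{user_prompt}", temperature=0.2)
```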

Oleg Alexandrov:

AI is moving past synthesis of existing things. It is able to make hypotheses, evaluate results, and iterate. That's how you discover new things.

Scott Burson:

You must not be a programmer.

Stephen Reed:

You can be the judge of that: https://www.linkedin.com/in/stephenreed/

Scott Burson:

Ooookaaay, hmm. You must have tried using an LLM for coding, yes? When I tried one, it made lots of mistakes, one of which took me a couple of hours to debug, and the LLM was no help at all. This experience is very consistent with what other people report. And that was in a domain (a simple web app) that's extremely well represented in the training data. It was clear, in particular, that the LLM has no big-picture model of what it's doing that would allow it to keep the various pieces consistent.

How can you possibly imagine that an LLM knows how to build AGI?

Stephen Reed:

I believe that an LLM knows how to build AGI because it can offer expert advice on the alternatives available at each AGI and coding design step. And if a piece of code is small enough, or otherwise present in its training set, the code can be generated from text requirements, and a working test case can be generated along with it. I use (expensive) Claude 3.7 Sonnet for generating examples, and (very cheap) GPT-4o-mini for answering structured code-generation prompts with one or more of the examples included in the context.
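For what it's worth, here is a minimal sketch of the cheap-model half of that workflow (OpenAI Python client; the model name is illustrative, and the embedded example stands in for an exemplar generated separately):

```python
# Structured code-generation prompt with one worked example in the context.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

EXAMPLE = '''Requirement: return the nth Fibonacci number.
Code:
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
Test:
assert fib(10) == 55'''

REQUIREMENT = "Parse an ISO 8601 date string and return the weekday name."

prompt = (
    "Follow the example's structure exactly: requirement, code, then a test.\n\n"
    f"{EXAMPLE}\n\nRequirement: {REQUIREMENT}\nCode:"
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; cheap model for structured generation
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```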

praxis22:

To be fair, The Information is also telling you what else to buy instead.

Glen:

It would appear that either they have a very profitable and viable business model, or the hype has won out over reality. Anyone who got in early on CoreWeave stock as recently as April 2025 is doing very well today. The stock has rocketed up to more than one hundred dollars a share from its $40 IPO price.

Other tech stocks like Nvidia and Microsoft have had pretty normal gains, while still others have flatlined or gone negative.

I think it makes sense. They're charging for use of their systems, so even if OpenAI and their ilk are hemorrhaging money, it's companies like CoreWeave that are bleeding them. If the bubble pops, and it still looks like a bubble to me, CoreWeave is going to go down with the rest of them. They'll have multi-billion-dollar data centers packed full of GPUs with very few other uses.

Linda Aaron:

I think you have oversimplified the CRWV IPO and its fundamentals.

Oleg Alexandrov:

Likely it is not the end that is coming, but rather a recalibration. As it has always been.

Large-scale machine learning still has the best track record in conquering complex problems.

The methods will be refined, and some companies will do well.

Youssef alHotsefot:

CoreWeave is currently trading below $38, a drop of more than 5% on its first day of trading. Full disclosure: I'd rather shoot craps. It's more fun. 🎲

Glen:

My gold funds are doing rather well. If you invest under the assumption that things will only get worse for working-class people and that economic instability will also worsen, you have a winning formula.

Chara:

That is wild.
