157 Comments
Matthew Kastor:

Nice! How about you, me, and your investor friend create an AI company and just start making things up. I'll write software, you do philosophy and innovation, and your other friend can go tell VC that we're about to build a Dyson Sphere and tap into the consciousness of the universe or something. All we need is 14 trillion dollars. 😎

Antonio Eleuteri:

Count me in! I have a PhD in statistics and I am not afraid to use it! (I can create convincing numbers while invoking mysterious statistical laws...)

Larry Jewett:

You have clearly reached the third level when it comes to lies.

But are you sure you have what it takes to reach the next level? -- OpenA-lies

Antonio Eleuteri:

I have been working on quackery, but I admit I am not very good at it, (un)fortunately.

Larry Jewett:

Maybe you could do an internship

Matthew Kastor:

Yesssss! We'll definitely need someone to create a Rube Goldberg machine of number puzzles to keep the quants happy. This is the key to getting featured on Bloomberg. If we could get you to wear a chef's uniform as well, then we'll be featured in Forbes in no time!

Antonio Eleuteri:

Ohh I love a chef's uniform! (this appeals to my Italian heart: food IS a religion for us, after all ;) )

Joy in HK fiFP:

Let's do it!

Matthew Kastor:

Lol as long as someone is giving away money for made up numbers and we get to sit around in air conditioned offices... it's too bad WeWork went under, we could have saved a lot on beer and really got the innovation flowing for free. 🤣

But for real though, I'm in. I'll never run away from free money.

Craig:

Would you hire me? I will work for coffee, cigarettes, and "exposure".

No, seriously, I haven't had a job since the Marines in 2011. I need to be a part of something, even if it's nonsense.

Matthew Kastor:

Deal! We'll need an insider with the military industrial complex when the spooks get mad that we're gobbling up all the free money. 😎

guts:

Sir, I’m a tech intern looking for a full-time role. I bring not just tech skills, but Taoist and Hindu wisdom. Blending ancient chakra techniques with AI automation to unlock higher states and ultimately become the creator of consciousness.

Larry Jewett:

What about Buddhist teachings?

Seems like the Dal-AI LLaMa would have some relevant things to say on the subject.

Matthew Kastor:

You're like 1 degree away from being vice president of PR. Would you be willing to recruit evangelists and set up some kind of tantric scandal to distract the media from our broken promises and flame broiled books, on a schedule? 😀

Stephen Thair:

You need to throw in some quantum BS as well to stay ahead of the zeitgeist...

Perhaps "holistic quantum consciousness AGI" or "massively-parallel quantum-accelerated neurolinguistic AI encoding".

The best bit is no-one will ever even ask what the terms mean, they'll just throw money at you!

Matthew Kastor:

"Schrodingers Think Tank". See, we take ALL the data, put it in the box, and tell the public they're too stupid to use the superintelligence. They'll try everything from asking it to talk dirty, draw dirty pictures, write political fan fiction, and replace their spouses who can't stand them because they spend all day trying to "integrate" in a superposition of being very much alive, but physically and mentally inert in front of screens. As users get dumber, our system will seem relatively more and more intelligent. 😎

Larry Jewett:

Schroedinger's Bot:

An AI in a stuperposition of Superintelligent and Superstupid.

You can never be sure which it will be until after the "collapse of the botfunction" -- i.e., after you have submitted the prompt and received the output.

Larry Jewett:

Known as the "AI measurement problem"

guts:

I’m down for the job sir. Student account perks for AI have my back.

Chara:

Can I work for said company? 😂

Matthew Kastor:

Hired!

Chara:

Why thank you! 😂

Sparks Fly Upwards:

Yo, hit me up, I'm your Dyson Sphere architect.

Matthew Kastor:

Our team will have the biggest most powerful spheres the solar system has ever seen! 😎

TJ:

Gary Marcus already did this with Noam Chomsky’s linguistics backwash

Matthew Kastor:

Umm, no he didn't. There's no Dyson sphere anywhere. What are you talking about?

Dean Hull:

I was amazed that the actual business press didn't do any of the math. They just hastily published a number of fluff pieces on how everyone was awestruck by the forward-looking (!) projections. As much as I want to just blame Tech Bros, you can't help but equally blame capital markets, analysts and press who do effectively zero research, and the passive investment rules that elongate (and worsen) bubbles. I don't see how this doesn't end very, very badly.

Mehdididit:

They never do! Remember the breathless coverage every time Trump announced a new trade deal? As if the Australians would agree to buy billions of dollars of American beef. In Australia, and most of the world, pasture raised, grass fed beef isn’t a premium product. It’s how you raise cattle. A feed lot in most other countries is a pasture.

And sure, the Japanese and the Europeans are going to spend billions on enormous trucks and SUVs that nobody drives in Japan or Europe and that won't even fit on the roads there.

We live in the world of just make it up, people will believe it. Then we wonder why AI is plagued with hallucinations.

manuel albarracin:

Regarding analysts, investors, and the business press, the current situation brings to mind "The Big Short". It's arguable which is more esoteric and high-inducing: CDOs on subprime mortgage loans or the pursuit of AGI.

ardj:

I think you'll find that the Financial Times usually gets this right. They were downright mocking about Oracle yesterday; while they skewered OpenAI over a month ago: https://www.ft.com/content/76dd6aed-f60e-487b-be1b-e3ec92168c11

Joe:

It's beyond absurd. The numbers can't make sense. How does anyone believe any of this??

Dean Hull:

I’m generally not one to entertain conspiracy theories of nefarious scheming but - against my better judgement - I keep thinking maybe there is something fishy going on. 😵‍💫😵‍💫

Oaktown:

I'm amazed anybody still takes the business press seriously. They did the same thing in the run-up to 2008, the Enron debacle, and the 1929 crash.

I do appreciate Ed Elson's take on these things, though (Prof G Markets podcast). He made many of the same observations Gary makes here a couple days ago; he's not always right about his predictions, but he always admits it when he isn't.

Future of Citizenship:

Time to short the market.

Chris Blue:

The market will remain irrational longer than you’ll remain solvent

Paul Jurczak:

Irrational? What if propping up the big investors of the ruling class is the rationale? Agree that socializing the losses still has a lot of life left.

Roy Royerson:

The pathways to us the taxpayers paying for all this shit are numerous. There will be a special facility for this or that, OpenAI will be declared essential for the security and prosperity of US society and whatnot, and you end up losing money on your short in addition to being on the hook for Larry's billions.

All that said, I am in.

Patrick Logan:

Cisco should have sunk in the dotcom boom. However today it's "worth" well over $200B. Nothing makes sense in tech finance.

PJ:

Cisco's current valuation makes sense. Been a hardstuck dinosaur for years now.

Patrick Logan:

Cisco's value increased dramatically during the dotcom bubble based largely on wild claims and misleading statements. When the bubble crashed a lot of people lost a lot of money.

Gerben Wierda:

My personal estimate of the bubble bursting remains (since 2023) roughly fall 2027, give or take a year. Shorting this from outside the USD area seems risky, though, as valuations of these stocks are in USD: a 50% decline in stock value could be compounded by a 50% decline in USD value (which is also a risk, given what the US political domain is doing), shrinking the short's proceeds in local-currency terms.
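A quick sketch of that double exposure for a non-USD investor. The 50% figures are the comment's hypotheticals, not forecasts:

```python
# A non-USD investor shorts a US stock. The stock falls, producing a
# profit in USD, but the USD also falls against the home currency,
# shrinking that profit on conversion. All numbers are hypothetical.
notional_usd = 100.0   # size of the short position, in USD
stock_decline = 0.50   # stock loses 50% of its value
usd_decline = 0.50     # USD loses 50% vs. the home currency

gain_usd = notional_usd * stock_decline       # 50.0 USD profit on the short
gain_home = gain_usd * (1.0 - usd_decline)    # worth only 25.0 in home-currency terms

print(gain_usd, gain_home)
```

So even a correct call on the bubble could be worth only half as much once converted home, which is the risk being described.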

TheAISlop:

Banger, Gary!

"OpenAI doesn't have $300 billion. They don't have anywhere near $300 billion."

I'll add that SoftBank doesn't have $300 billion either. And SoftBank perpetuated the 2025 spend.

Larry Jewett:

They don't even have $300, at least not of their own money.

Diamantino Almeida:

If people start realizing that GenAI is essentially a digital generative parrot, mimicking language without true intelligence or understanding, the bubble could burst even faster than many expect. The hype built around these models often paints them as approaching AGI or transformational intelligence, but the reality is far more modest: they generate convincing text by pattern matching at scale, not by thinking or reasoning deeply.

From my perspective, this disconnect between promise and reality is a ticking time bomb.

The market right now is pricing in growth based on optimism, not on the hard truth that this technology, impressive as it is, remains fundamentally limited.

This isn't just pessimism; it's a grounded view based on what I see as the reality behind the curtain of AI hype. The promise of AGI has been prematurely conflated with today's generative models, and that gap, once fully recognized, could trigger the market to recalibrate its expectations dramatically.

This is why I feel the current era is less about a tech revolution and more about a moment of reckoning with what generative AI actually is: a highly advanced, but ultimately statistical, parrot.

Guidothekp:

Enron accounting.

Jim Carmine:

I entirely agree! Sorry, Grok, gravity is real. We are looking at another dot-com bust within the next 18 months at most. "This is a different kind of reality than the one physical gravity imposes, and it highlights the powerful, complex relationship between our minds and bodies." -- Gemini

Aaron Turner:

We all know it's coming, the question is when? 12 months? 5 years? After the VCs have finally pulled the plug, then what happens? Distressed pivot from the AGI dream (Plan A) to LLMs fine-tuned for gazillions of ANI apps (Plan B) in order to salvage as much of the VCs' cash as possible? (All of which will of course be just as poorly aligned as the giant LLMs were, inexorably inflicting societal damage on a global scale.) Will the non-AI CEOs then pivot from FOMO on AI to actually hiring humans again? Will government policy finally pivot from exploitation ("winning the AI race") to meaningful (even global) AI regulation? And will there be research funding for non-LLM-based AI research, such as neurosymbolic architectures, once this LLM nonsense is finally behind us?

Wes Hook:

Won't happen until the "last fool" in the greater-fool chain is in and the original investors cash out... Oh, wait, aren't the OpenAI stockholders allowed to sell $18B of their stock on the next raise? (Correct my number if I'm not remembering correctly.) And isn't SoftBank still holding $30B of their $40B investment? So, who is the last greatest fool? The US Government?

Larry Jewett:

The American taxpayer is always -- in all ways -- the last greatest fool.

We are the Charlie Browns to the Lucys pulling out the football at the last minute every damned time.

Expand full comment
Oleg  Alexandrov's avatar

To add to the other comment, I think neurosymbolic is likely more wishful thinking than anything else. That was considered the default path to AGI till recently, and the AI field was seriously moribund before large scale statistics gave it a jolt. Now, putting neurosymbolic on top of existing efforts, that will be fun to watch.

Expand full comment
Oleg  Alexandrov's avatar

"Distressed pivot from the AGI dream (Plan A) to LLMs fine-tuned for gazillions of ANI apps (Plan B)"

This was probably the plan all along, with AGI being a smoke screen. Investors are a jaded bunch, usually, and they will demand concrete uses and returns.

I will also question whether we will ever get to AGI via some elegant discovery. Exhaustively exploring what today's lesser methods can and cannot do is likely at least a prerequisite. I think they will also go a long way.

RCThweatt:

There's possibly a certain cognitive dissonance holding this up...it can't be that insane, can it?

Since we are a country that elected Trump, twice, yes, we are insane.

James Jameson:

May God hear you. I am sick of this crap being shoved down my throat. Mediocre tech.

Mark Montgomery:

I'm right with you on this, Gary -- it was truly bizarre watching this unfold yesterday. The Economist has a good article today "What if the $3trn AI investment boom goes wrong?" I posted it on LI with the following comment.

~~~~

This is a good piece from a market and economist perspective. It asks the right questions and provides logical scenarios. I'll offer some technical guidance.

"The technology may evolve in ways that investors do not expect. When alternating current eventually prevailed...direct-current electricity firms were overshadowed and forced to consolidate. Today investors reckon that the probable ai winners are those that can run the biggest models. But, as we report this week, early adopters are turning to smaller language models, which could suggest that less computing capacity may be needed after all."

As I often point out, despite the obsession with language models today, we think they'll play a small role moving forward. The other 7 functions in our KOS provide a higher ROI for enterprises. LLMs are already commoditized. The cost and revenue will plummet with the exception of narrow AI specialties with high-value outcomes like accelerating discovery.

"Or the road to widespread adoption could be slower and bumpier than investors expect, giving today’s ai laggards a fighting chance...difficulty of quickly supplying electric power, or managerial inertia could mean that take-up is more gradual than first hoped...The flow of capital could slow; some startups, struggling under the weight of losses, could fold altogether."

This is already occurring in LLMs, but we are seeing more interest -- as it should be.

"What would such an ai chill be like? For a start, a lot of today’s spending could prove worthless. After its 19th-century railway mania, Britain was left with track, tunnels and bridges; much of this serves passengers today. Bits and bytes still whizz through the fibre-optic networks built in the dotcom years. The ai boom may leave a less enduring legacy. Although the shells of data centres and new power capacity could find other uses, more than half the capex splurge has been on servers and specialised chips that become obsolete in a few years."

Precisely the point I've made here repeatedly that should serve as a warning to investors and lenders. I was reviewing industrial bonds yesterday and noticed that some separate RE and equipment while others appear to be treated the same. Hardware exceeds 60% of total investment for LLM DCs, and unlike land, power and buildings, has a short fuse. Some are replacing high-end chips annually, some 2-3 years, & others 5+ years. It depends on the specific computing needs. Our KOS employs high-end chips for a small portion of overall compute. Due to our focus on high quality data, we use much less power, water and hardware -- much smaller DCs, environmental footprint, and capex.

That the LLM bubble will implode is almost a certainty. Vast sums will be lost. What remains to be seen is whether we can pivot and scale the next gen of efficient AI systems like our KOS. Let's hope so. Our economy may depend on it.

https://www.linkedin.com/posts/markamontgomery_what-if-the-3trn-ai-investment-boom-goes-activity-7371927393978134528-w-fI

Houston Wood:

I read through 41 comments, all agreeing with Gary Marcus's latest post at Marcus on AI. Isn't that strange--all the skeptics are there together, cheering each other on, while elsewhere those holding other positions are gathered together cheering their fellow believers on.

Where can I go to read dialogue and discussion? For it just may be that there is more uncertainty about this than Marcus on AI allows, as there is usually more uncertainty about Everything in the Future than most people can tolerate.

Dean:

It's the internet. There is no place for dialogue or discussion anymore. You post something and then watch AI bots dunk on each other.

Roy Royerson:

Are we short on AI enthusiasts suddenly? My LinkedIn feed suggests otherwise.

Larry Jewett:

As Hamlet once said, "There are more AI enthusiasts in Heaven and earth than are dreamt of in some people's philosophy"

manuel albarracin:

I would think that most read him (me included) anticipating validation of our own scepticism. A richer debate probably requires some other medium.

Houston Wood:

Yes, I guess me too. But of course, there will be a correction, maybe gigantic, in the next 5 years or so. But after that? Won't AI emerge to swallow our lives, much like the Internet arose to swallow us after the dot-com bust? His view is seeming to me increasingly narrow and self-serving--like a pop star who just keeps singing variations on his first breakout hit.

Alex Tolley:

I suppose the US Treasury could "invest" with all those "$Ts" of tariffs? ROTFL.

OTOH, NVIDIA needs to manufacture...erm...TSMC needs to manufacture all those GPUs. Good time for China to invade Taiwan, collapse that supply chain, and with it the US stock market?

What could go wrong?

Guidothekp:

They are making themselves TBTF. They are also our next-gen military leaders (they hold Pentagon titles). So they are making themselves key for national defense. On top of that, they are also planting articles that are doing what Altman did with AGI -- lowering expectations. Just yesterday, I came across a piece saying that AI alone will not help the US catch up with China.

This time around, we should probably nationalize the bros, combine them into one governmental organization and avoid any more moral hazards.

Barn Owl:

As I watched Oracle stock 'increase' on Tuesday, at the time up $180 billion or so, I wanted to put this number in perspective.

If everyone of working age in my province and its neighbor, British Columbia and Alberta (about 6 million people) worked full time at minimum wage for a year, well--the shareholders of Oracle made the same in half an hour. [And there I'm treating the two currencies at parity, so it's really 38% more.]
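As a rough check of that comparison. The hours and wage figures below are my assumptions, not the comment's:

```python
# Back-of-the-envelope check: total annual minimum-wage earnings of
# ~6 million full-time workers vs. a ~$180B one-day market-cap move.
# Assumed inputs (not from the comment): 2,000 hours/year full time,
# minimum wage around CAD 16/hour in BC/Alberta.
workers = 6_000_000
hours_per_year = 2_000
min_wage_cad = 16.0

annual_wages_cad = workers * hours_per_year * min_wage_cad
print(f"{annual_wages_cad / 1e9:.0f}B CAD")  # roughly 192B CAD
```

That is indeed the same order of magnitude as the ~$180B single-day move cited, before the ~38% currency adjustment the comment mentions.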

I'm still around tech types in this substack, but capitalism is completely broken. The taxation system that 'absorbs' excess money and recirculates it to public benefit is completely broken. The US is supposedly in an economic renaissance of some kind and still runs huge deficits.

A capital gain should not be taxed at half the rate of the workers' incomes I mentioned above. It disgusts me deeply. It is so badly designed that it has to be intentional, of course.

Catherine Blanche King:

So glad I found you "guys" (<-meaning all-gender). It's what I have thought for a long time. Will do my own research, but am wondering what is the basic cognitional theory/philosophy they are basing their analyses on (if any)?

RCThweatt:

Maybe Locke's tabula rasa (blank slate) model of the human mind, in which knowledge and intelligence are all due to learning (which doesn't seem to be usefully defined). This is wrong, both a priori and empirically. We're born with a lot already hard-wired, just like our fellow animals. How else could it work? That's the a priori part.

Catherine Blanche King:

RCT: Thanks for your reply. As you seem to imply, and unless there is much more pending than Locke could supply, methinks we need a philosophical reboot.

RCThweatt:

Besides Locke, there’s B. F. Skinner. I recall back in the day seeing a Skinnerian say on TV, “Psychologically, all behavior is learned”.

Not that the AI/LLM guys have necessarily read much of philosophy or psychology. That may be their problem, they’re making assumptions someone who had studied them wouldn’t make.

Catherine Blanche King:

RC: I am inclined to agree "That may be their problem." I must confess, the philosophical basis of our current set of misunderstandings, especially in the neurosciences, has been my study for a long time. Given the writings of Locke, Skinner, and a wealth of others between then and now (and this is the reason I am drawn to Marcus' ideas in this case), on the face of it I could not believe that these guys are throwing so much money and effort at something they have no scientific, much less psychological or philosophical, evidence for; that there is an evidenced background to be had to support what Marcus is saying up front; and that it is (apparently) intuitive for anyone who knows, for instance, the tulip situation, or what it means to put the cart before the horse.

I am also on board, so to speak, with the integrative place of mathematics and music in the natural order--I have it on my list to check out your reference, though I am not a mathematician.

Truer words were never spoken: "Not that the AI/LLM guys have necessarily read much of philosophy or psychology. That may be their problem, they’re making assumptions someone who had studied them wouldn’t make." Indeed and in spades.

Bruce Cohen:

As far as I can tell few if any of the technical people in the LLM industry have studied neuroscience, and most (possibly including Geoffrey Hinton, who ought to know better) think that artificial neural nodes are a reasonably faithful model of biological neurons.

Catherine Blanche King:

To Bruce Cohen. From just my own brief perusal of new stuff going on in the natural sciences (including connections between the biological sciences and quantum physics), the fields are exploding with exceptional research, papers, and breakthrough ideas, both verified and speculative (so far). On the other hand, the questions that spring from human intelligence, along with other forms of intelligence, though related to and in some ways dependent on physics and the natural world (and sentience), take us far beyond those worlds--and though statistical sciences are grand, they still don't "do the trick."

It seems to me the purveyors of those big ideas, while remarkably "smart" in their own ways, are already running into the theoretical wall that is made of the combination of absences and mistakes that, in turn, fuel a lack of understanding of the whole of human intelligence, including major insights and corrections in philosophy and the offerings of the human sciences that have also burgeoned in the last century and this one.

And then there's history, where Marcus seems to have understood the importance of paying attention there. I love a technological and physical/natural sciences kind of education; however, in my view, an absence of those other human things in those specialist studies that underpin them, over the last 70 or so years, is showing up in those who speak for and work in the fields now, and in the headlines of the New York Times. I've gone on too long . . . sorry. But thank you for your thoughtful comment.

Bruce Cohen:

I think a large part of the problem is the tendency to hold onto initial assumptions made 50-75 years ago that need to be re-evaluated or just dropped. One of the most important, IMO, is the assumption (practically a dogma) that the brain and nervous system are computational systems, meaning they can be analyzed using the tools of Turing completeness, undecidability, automata theory, etc. I personally doubt this is true (i.e., the brain is not a computer), but at the very least it seems necessary to find some experimental evidence for it.

Catherine Blanche King:

Bruce: Yes, yes, and yes. Also, those "initial assumptions" are probably more "initial" than one might think. Philosophical confusions have been with us for centuries, can occur early in childhood and, if not corrected, tend to "morph" into all sorts of conflictive assumptions as one develops while "standing" on those deeper philosophical oversights and mistakes. (What could go wrong?)

Also, in my view, and as an adjunct to the above, until we know more about what occurs to our intelligent accumulations of meaning/memory while we sleep, we won't be able to consider duplicating or making better a way to have a real conversation with another human being. And that's just a start.

The upshot for today's "Gary Marcus" ideas, however, is not abstract in the negative sense of that term, but is that (it seems to me and from my long-and-ongoing theoretical study of it) all that cognitional stuff needs to be worked out with some serious clarity before running over to Wall Street with one's bank account on fire.

A Thornton:

AFAICT, the belief is that Mathematics, i.e., Excluded-Middle Logic and Set Theory, is all that is required to replicate genotypic human behavior, e.g., Language.

Catherine Blanche King:

A. Thornton: Thanks, I'll look it up.

A Thornton:

Sources that may be of help:

"The Church-Turing Thesis: Breaking the Myth" by Goldin and Wegner

"Beyond Sets: A Venture in Collection-Theoretic Revisionism" by Rescher and Grim

Interdisciplinary Science Reviews, Volume 46, Issue 1-2 (2021) "Artificial Intelligence & its Discontents, Thematic Issue Editor Shunryu Colin Garvey"

AI Brigade hasn't written anything directly. It's all assumption and misdirection.

Herbert Roitblat:

We lose money on every transaction, but we make it up in the volume.

One problem is that the fundamental faith in scaling is misplaced. You, Gary, point out the financial parts of this, but Nvidia only produces about 4 million of the necessary GPUs per year. How long would it take them to produce 8 million? How many will be needed? According to the GPT-4 technical report, it took 10,000x more compute to move from GPT-3.5 to GPT-4, which, on their measure, produced a 1-bit reduction in error. According to that same curve, the next 1-bit reduction will require 100 quadrillion times more compute. Even by their definition of intelligence, it improves only linearly as a function of exponential increases in compute. Great for the chip sellers, but unsustainable as a business. Something's got to change.
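The shape of that trade-off can be sketched numerically. This treats the 10,000x-per-bit figure cited above as a constant cost per bit, which is a simplifying assumption, not the report's exact curve:

```python
# If each additional 1-bit reduction in error costs a fixed multiple of
# compute, total compute grows exponentially while the error measure
# improves only linearly -- the unsustainable curve described above.
def compute_multiplier(bits_gained, cost_per_bit=10_000):
    """Total compute multiplier needed to gain `bits_gained` bits."""
    return cost_per_bit ** bits_gained

print(compute_multiplier(1))  # 10000 for the first bit
print(compute_multiplier(2))  # 100000000 for two bits
```

Under this assumption, every fixed step of improvement multiplies the bill by four orders of magnitude, which is why the business looks unsustainable even if the technical curve holds.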

I think that we need a change of paradigm. The incumbent scaling paradigm will not be getting much better, but the costs are enormous and they will grow. I have some ideas of what this new paradigm could look like and have done some pilot work on it. This new paradigm is dramatically less demanding computationally and is more reliable than the current crop of token guessers. I know that you and others have alternatives in mind as well. We may be able to avoid a bubble bust if we can adopt the right changes. Let's not just decry the coming problems; let's work on ways to fix them. Instead of digging the current hole deeper, let's dig in a different place. Maybe we can find treasure there.

Friedrich Schieck:

Example: The Copernican Revolution!

Copernicus formulated the heliocentric world view. He placed the sun at the center of the universe, around which the planets, including Earth, revolve. This idea represented a radical break with the previously dominant geocentric world view, in which Earth was considered the immovable center around which everything else revolved.

Galileo Galilei confirmed Copernicus' heliocentric world view through his observations with the telescope. However, he was forced to recant his findings and was sentenced to lifelong house arrest because he questioned the existing world view of the Church and thus the balance of power.

Giordano Bruno advocated the Copernican heliocentric system and expanded it into a metaphysical doctrine of the infinite multiplicity of worlds. He opposed the geocentric world view and professed his belief in the Copernican theory. On February 17, 1600, Giordano Bruno was burned at the stake in Campo de' Fiori.

Going against prevailing opinion and questioning existing power structures can be dangerous, especially when many investors stand to lose a great deal of money. As the saying goes: What must not be, cannot be!
