So let me get this straight. In April, OpenAI raised $300M at a $29B valuation. And now, five months later, employees are selling shares at a $90B valuation.

If the company really tripled in valuation in five months, you would never sell, right?

Well. It might not be that simple. OpenAI is not planning to go public anytime soon. This could be the only near-term opportunity for liquidity for many employees. Potential future wealth may be amazing, but for many people it is not as great as wealth now. 😀

In addition, since OpenAI is a capped-profit company, some employees may hold shares that have already maxed out in potential value somewhere below that $80-90B valuation, so there would be no reason for them to wait for a higher valuation.

Granted, we can also ask why this wasn't made available to employees in earlier rounds, when either of those two factors would have been just as relevant. The sale may give some employees liquidity, but it also sets a higher valuation benchmark for a new funding round, which would let OpenAI raise cash for acquisitions.

Hallucinations are not a "huge unsolved issue" but a "huge unsolvable issue," given how autoregressive models (including transformer LLMs) work. I read somewhere (I don't remember where) that Sam Altman has already started to rebrand hallucination as a feature ('creativity', which I think is defensible) rather than as a problem that needs to be solved.

The $90B valuation of OpenAI is itself a big hallucination :)

Yes, an unsolvable issue; it's inherent in the technology. I've got a brief post in which I argue that not only are such "hallucinations" natural to LLMs, they're "natural" for us as well. Neuroscientists have identified a complex of brain regions they call the default mode network, which is running while we're daydreaming, among other things. In default mode, we confabulate. It's the fact that we must constantly communicate with others that keeps us from getting trapped in default mode.

https://new-savanna.blogspot.com/2023/09/a-quick-remark-on-so-called.html

Hallucinations can be a feature and not a bug.

Well, yeah, sure, they could be. But only in retrospective wishful thinking. No one would offer that kind of justification if we could turn them on and off at will.

As others have said here, there are ways to reduce hallucinations but as you suggest, they won't go away. Nor will falsehoods in the Encyclopedia Britannica and Wikipedia. I hope this finally convinces people to verify information. As long as you do that, the benefits are still strong for many LLM-based applications.

Comment removed

But 3 and 4 take place outside the model, no?

"LLMs will likely never deliver on the promise that they are somehow close to AGI."

That's being generous. There is nothing in LLMs that is even remotely connected to solving AGI. Zilch. This insane valuation is based mostly on hype.

If anything, LLMs (and generative AI in general) are a complete waste of time and resources if AGI is the goal. That has been my position since the beginning of the deep learning revolution. This does not mean that the technology is useless or unimportant. It is just irrelevant to cracking AGI.

As a general rule, net biz works kinda like this...

1) get into a fad early

2) build like crazy

3) cash out at the peak of the fad

4) repeat

If OpenAI employees are in a position to secure their families' futures now, "grab the money and run" seems a pretty good plan.

I said it was wise for them.

And I agreed.

The costs for these LLMs will go down as hardware improves and more efficient algorithms are found, so operating costs will keep decreasing, as they have significantly over the last few years. There's definitely a business in all the coders out there who will pay $20 a month for it, and in the businesses that want synthetic data. So there's a viable business there. Still, it's risky valuing OpenAI that high when they were only slightly ahead of the pack, and there's no reason to believe that lead will hold when companies like Facebook and Google can catch up very easily. Smaller companies like Hugging Face are quite promising as well.

$90B worth of profit, though? A viable business for a company valued at $5B, maybe.

I have no idea; $5B sounds like it could be right. Have you considered shorting the market?

The difference between human hallucinations (e.g., dreams and daydreams) and LLM hallucinations is that, for the vast majority of humans the vast majority of the time, humans can tell when they're hallucinating. LLMs can't.

There are potential solutions to this, as Andy X Andersen outlined in the comments. Yet as Bill Benzon noted, some (if not all) of the most practical of these technically take place outside the model.

However, the fact that they're external isn't really a blocker to implementation; in fact, there are already validation steps outside the model that restrict the output, and they're transparent to the end user. So I expect we'll see external validation used to reduce (but not eliminate) hallucinations.
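For a sense of what such an external layer might look like, here's a minimal sketch. The `generate` and `fact_check` functions are hypothetical stand-ins for a model call and a retrieval-based verifier, not any particular vendor's API:

```python
# Minimal sketch of post-hoc output validation, outside the model itself.
# `generate` and `fact_check` are hypothetical placeholders, not a real API.

def generate(prompt: str) -> str:
    """Stand-in for an LLM call that may hallucinate."""
    raise NotImplementedError

def fact_check(answer: str) -> bool:
    """Stand-in for an external verifier, e.g. retrieval against a trusted corpus."""
    raise NotImplementedError

def validated_answer(prompt: str, max_retries: int = 3) -> str:
    """Regenerate until the output passes the external check, or give up."""
    for _ in range(max_retries):
        answer = generate(prompt)
        if fact_check(answer):
            return answer  # transparent to the end user: they just see the answer
    return "I could not produce a verifiable answer."  # reduces, never eliminates
```

Note the last line: a wrapper like this can only filter or retry; it can't make the underlying model stop confabulating.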

I doubt, though, that will reduce the legal liability enough to prevent some companies from banning use of LLMs by their employees at work (which quite a few already do). No matter how much insulation you add, making shit up doesn't play well with corporate CYA.

Counting money is so uncool already. "This is the magic of big data" (Medium.com, 2023) is the sort of thing tech gurus often parrot: the idea that you can bypass the need to actually understand anything in detail. Why? Because you can simply place your 'faith' in the "truth of digital information" (Wired). Chris Anderson's "The End of Theory" (2008) speculated about a future without the need for "scientific method and dedication." Erroneous or eerie, that vision points to a constant stream of validation and "post-truth." Earlier we placed our faith in sky gods; now it's big data, AI, and information technology. To me, big data (as well as its fallacy) seems a logical outcome of reductionism: the belief that complex systems can be understood (and also mimicked) if we dismantle, study, and copy each element in isolation. Such a practice sounds great, but only if it can keep pace with our experience and reality, and it is proving insufficient from the onset.

Silly thinking. If I were an employee holding an early-stage chunk of a company valued at 90 BILLION dollars, of course I would be trying to sell a slice of that wad, even if I thought it was going to the moon!! Sell a third: it will be life-changing, and if it keeps going up, you will be even richer with the remaining two-thirds. That is a smart strategy no matter what you think of your company's chances.

(Still, I also worry about $90 billion. That is Google-level equity, but with all the capital chasing this dream, why do we expect the winner to command a Google-like lead in this new market... unless OpenAI remains technologically superior, in which case it could easily be worth that. But I doubt it will remain in a technological category of its own.)

Quick sidebar: "The profit isn't there. Supposedly OpenAI is on track to make a billion dollars in revenue this year, but..." If companies like OpenAI, with big-tech partnerships that let them sprinkle AI into huge product bases, are struggling to turn a profit, imagine the other side: the "open source" players aren't making any revenue; they're just raising round after round.

I feel you may be right here. On a more personal note, what do you think about AI systems training on your own writing? This very post, for example. It's already started. Read about this important topic in my latest offering: https://boodsy.substack.com/p/the-ai-bots-are-coming-for-your-substack

> LLMs will likely never deliver on the promise that they are somehow close to AGI.

What do you recommend in terms of neural architectures to get compositional, causal, symbolic representation and reasoning?

LLMs are amazing next-word predictors, great for language problems where the precise meaning of words isn't the most important thing (grammar, translation, poems, summarization, expansion, etc.).

OpenAI would be much better if it were trained only on open-source information. Top of the list of ChatGPT's sources are the NYT, CNN, and the other MSM propaganda outlets. Ask ChatGPT yourself. It's honest in this regard. 😎

The training issue is a killer. How do you keep a model up to date? Right now you have to retrain the whole thing from scratch. As you point out, Gary, that's expensive. Are they going to do that yearly? Every two, three, five years? It's a problem inherent in neural architectures, where processing is spread over each "neuron." The issue has been known for a long time; it's clearly stated in that 1988 Fodor and Pylyshyn article. I know of some work in image processing directed at mitigating the problem. I have no idea whether anyone's pursuing that in the transformer space.

See "Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition," https://doi.org/10.3390/jimaging8040093
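For the curious, here's a rough sketch of the generative-replay idea from that line of work: rather than retraining on the full original corpus, a generator trained on earlier data supplies synthetic "old" samples that are mixed with the new data during the update. All names here are illustrative placeholders, not the paper's actual code:

```python
import random

# Illustrative sketch of generative replay for continual learning.
# `model` and `generator` are hypothetical objects with train_step / sample /
# fit methods; see the linked paper for the actual architecture.

def batches(data, size=32):
    """Shuffle and yield mini-batches."""
    data = list(data)
    random.shuffle(data)
    for i in range(0, len(data), size):
        yield data[i:i + size]

def continual_update(model, generator, new_data, replay_ratio=0.5):
    """Update the model on new data without retraining from scratch.

    Synthetic samples from a generator trained on earlier data stand in
    for the old training set, so past knowledge is rehearsed rather than
    relearned."""
    new_data = list(new_data)
    n_replay = int(len(new_data) * replay_ratio)
    replayed = [generator.sample() for _ in range(n_replay)]  # fake "old" data
    for batch in batches(replayed + new_data):
        model.train_step(batch)  # mix synthetic-old and genuinely-new samples
    generator.fit(replayed + new_data)  # keep the generator current too
    return model
```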

Stupid question: how are they selling shares if it's not a publicly traded company?

It's called a secondary sale. It's done privately and has to be authorized by the company, but it's not that uncommon, and OpenAI (and many others) have done it before.

ChatGPT Costs a Whopping $700,000/Day to Operate, Says Research

https://analyticsdrift.com/chatgpt-costs-a-whopping-700000-day-to-operate-says-research

That's $21 million a month. So they'd need about a million subscribers at $20/month just to break even.
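The back-of-the-envelope math, for anyone who wants to plug in their own numbers (the $700K/day figure is from the article above; $20/month is the ChatGPT Plus price):

```python
# Break-even arithmetic using the figures cited above.
daily_cost = 700_000              # USD per day, per the linked article
monthly_cost = daily_cost * 30    # ~$21 million per month
price_per_sub = 20                # USD per month (ChatGPT Plus)

break_even_subs = monthly_cost / price_per_sub
print(f"Monthly cost: ${monthly_cost:,}")                  # $21,000,000
print(f"Break-even subscribers: {break_even_subs:,.0f}")   # 1,050,000
```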

Anybody know how many paying subs they currently have?

They likely lose money on subs but make money off the business end, selling API calls.

Ok, that's useful input I hadn't considered, thanks.

Does anybody have any idea how many paying general public users they have? Or is that privately held info?

I have an AltaVista-slash-Gopher feeling about them.
