
See https://platform.openai.com/docs/guides/reasoning for a description of how it works: the model parses the input prompt and then (presumably via the same hallucinating transformer) generates a chain-of-thought series of "reasoning tokens", which is appended to the original prompt and fed back through the transformer to hallucinate the final output.

You get charged for all the reasoning tokens even though you never see them; that page warns that a single response can generate up to 25k reasoning tokens. It seems they also fix the temperature at 1 for these preview and mini versions, which is, as we all know, the setting for being away with the fairies.
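To put the billing in concrete terms, here is a rough sketch. It assumes reasoning tokens are billed at the same rate as visible output tokens (as the linked docs indicate); the per-token price used is a made-up placeholder, not OpenAI's actual rate.

```python
def completion_cost(visible_output_tokens: int,
                    reasoning_tokens: int,
                    price_per_output_token: float) -> float:
    """Total output cost: visible tokens plus the hidden reasoning tokens,
    all billed at the output rate (assumption per the linked docs)."""
    return (visible_output_tokens + reasoning_tokens) * price_per_output_token

# Hypothetical price: $1e-5 per output token (placeholder, not a real rate).
# A 500-token visible answer that burned 25,000 hidden reasoning tokens
# costs 51x what the visible text alone would suggest.
visible_only = completion_cost(500, 0, 1e-5)
with_reasoning = completion_cost(500, 25_000, 1e-5)
print(with_reasoning / visible_only)  # 51.0
```

The point being: the bill scales with tokens you cannot inspect.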

This smacks of the use of GPT-4 to create the prompts for DALL-E 3 and Sora: hallucinations compounding hallucinations. A recipe for catastrophe.
