Discussion about this post

Youssef alHoutsefot

Excellent. Again.

"Broad, shallow intelligence (BSI) is a mouthful"

Let's simplify the acronym by shortening it. I think that BS covers the issue nicely. At least for the present.

Y. Auh

LLMs don't have intelligence. They are still just programs that can answer questions posed in ordinary language. The answers are not reliable. Because they lack internal structured models, they cannot tell whether they are being truthful or not. They are not "hallucinating"; that term is a typical case of anthropomorphism. Even young children know when they are making up stories.

LLMs don't have the capability to reflect on their own operation.

Typical LLM products rely on case-by-case fixes to deal with known hallucination cases. They are like Amazon's automated shops, which were maintained by remote humans.

LLMs are hugely wasteful. They consume huge amounts of electricity and water for frivolous queries of questionable value.

We need to recognize that LLMs are just tools for generating draft text under proper constraints.

A fraction of the investment could be directed to proper academic research on human intelligence and knowledge, for greater value. By neglecting such research, we are hurting ourselves.
