
Perhaps each "prior" has its own model, essentially a Go-like bundle of rules, state-evaluation functions, and goals. Human-level intelligence might then be achieved by linking these prior models together into a supermodel.

The reference problem isn't as hard as it appears. Word vectors offer a good example of dimensionally reduced arrays that can act as semantic references. From a "priors" perspective, how can an evolved human trait like greed or ambition refer to modern objects? How can I crave an iPhone? We must have innate circuitry that trains the "craveables" vector so that our priors can reference it.
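A minimal sketch of the idea: an innate "craveables" direction in embedding space could let an evolved prior latch onto novel objects by similarity. The vectors and the "craveable" direction below are invented toy numbers, not from any trained model.

```python
import math

# Toy 4-d "embeddings" -- made-up numbers purely for illustration,
# not taken from word2vec or any real model.
vectors = {
    "iphone": [0.9, 0.8, 0.1, 0.0],
    "gadget": [0.8, 0.7, 0.2, 0.1],
    "rock":   [0.0, 0.1, 0.9, 0.8],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# A hypothetical "craveables" direction that an innate prior might
# train; modern objects become craveable by landing near it.
craveable = [1.0, 0.9, 0.0, 0.0]

scores = {word: cosine(vec, craveable) for word, vec in vectors.items()}
most_craved = max(scores, key=scores.get)
print(most_craved)  # the object closest to the "craveable" direction
```

The point is only that a fixed, evolved reference (the direction) can pick out objects that didn't exist when the prior evolved, as long as something maps those objects into the shared space.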

Many good ideas are floating around -- game engines, agent-based representations -- that would augment current data-driven/statistical methods like transformers. Perhaps priors are mental agents, actively searching for their instances. Without mental representations, you can't handle counterfactuals or absences like "I noticed the train didn't pass at 2am as it usually does."
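The train example can be made concrete: a purely reactive system only responds to events that happen, while an agent holding an internal model can compare expectations against observations and notice what *didn't* happen. The schedule and observations below are invented for illustration.

```python
# Internal model: what the agent expects to happen, as (event, hour) pairs.
expected_events = {("train passes", 2)}

# Tonight's actual observations -- no train event recorded.
observed_events = {("owl hoots", 3)}

# An absence is only detectable as expected-minus-observed;
# without the expectation set there is nothing to subtract from.
absences = expected_events - observed_events

for event, hour in sorted(absences):
    print(f"noticed: '{event}' did not occur at {hour}am")
```

This is the sense in which counterfactual noticing requires a representation: the anomaly lives in the model, not in the sensory stream.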

Quite a few mental agents would be needed: causality, space, time, objects, numbers, agency, facial recognition, family, stranger, trust, anger, greed, official, accidental vs deliberate action, desire to walk, awe, shame, regret, desire to lead, desire to follow, need for approval, hunger, depression, happiness, love, explore vs exploit, fight or flight, etc.

Priors are scary because they cover human motivations, drives, fears, prejudices, goals, emotions, and feelings. But you can't understand human gossip unless you understand human nature. You don't have to admire Donald Trump to appreciate that humans often suffer from craven impulses, ambition, and greed.
