https://jamesclear.com/all-models-are-wrong

"In 1976, a British statistician named George Box wrote the famous line, “All models are wrong, some are useful.”

His point was that we should focus more on whether something can be applied to everyday life in a useful manner rather than debating endlessly if an answer is correct in all cases. As historian Yuval Noah Harari puts it, “Scientists generally agree that no theory is 100 percent correct. Thus, the real test of knowledge is not truth, but utility. Science gives us power. The more useful that power, the better the science.” "

It's likely this is a detour on the road to AGI: so what? Some can pursue AGI while others create useful tools. In general, people build tools on current technology even while others work to improve that technology.

You note: "solve any of the core problems of truthfulness and reliability". Humans aren't entirely truthful or reliable and yet they are sometimes useful. There are concerns over a replication crisis even in the world of science and flaws noted in the peer review process. Humans are still trying to figure out the best approach to collectively seek reliable information while seeking "truth". Humans don't always agree on results using a judicial process to seek "truth".

Humans in general often don't agree on what is truthful or reliable, so making that a necessary hurdle sets an impossible goal, and it may attach a constraint that itself detours from the path towards AGI.

In the meantime, people need to grasp that machines can be fallible just like humans. They can compare human sources of information with machine sources, and machines can aid with that process. Yes, tools and methods should be created with the reality of potential harms in mind, just as people already do with other technology: they create anti-virus software, spam filters, and so on.

The tech is invented by people trying to solve real-world problems. Regulators don't invent the tech; they usually merely distract from the problem and can in fact detract from solving it. Regulatory capture often lets big players shut out competitors, so despite the myths, big players often want regulation. Unfortunately, some humans don't try to find reliable information on all aspects of the subjects they write about.

Admittedly, of course, increasing reliability and accuracy about reality is a goal to strive for, since we'd like to improve on human reasoning: for instance, to give these systems the humility to consider that they may be wrong or unreliable due to flaws in the world or in themselves. Humans should spend more time on that too, especially those the public listens to.

You noted that a prior comment of mine was "condescending", but I'd suggest that reaction mirrors the one many readers have to your own comments: they come across as implicitly condescending in the sense of not actually considering or addressing anything other than strawman versions of the critiques of your writings.
