Discussion about this post

Dakara's avatar

"On a good scientific calculator, every single number would be in green."

So many have argued over this recently. Critics will often say, "well, I can't do that in my head." That's right — you would ask for a calculator, because you have self-reflection about your abilities and an understanding of the tools that extend them.

This led me to write "Why Don't LLMs Ask For Calculators?" - https://www.mindprison.cc/p/why-llms-dont-ask-for-calculators

It is another simple example that completely exposes the lack of any reasoning. Do LLMs know that they are bad at math? Yes. They will state so, based on their training of course. Do LLMs know what a calculator is? Also yes. And yet they still can't put these two concepts together to realize they are likely giving you wrong answers.

Jeanne Dietsch's avatar

Again I ask, why is the goal to make AI as smart as humans? As CEO of an intelligent robotics company, my question was similar: why make androids? The point of automation is to solve problems. General purpose humans, or general purpose human brains, are hardly the best solutions to human problems, or market opportunities. It's like building a mechanical super-horse instead of inventing a Tesla.

Even for the transhumanists seeking to build the next-generation human 2.0, the basis for evolutionary symbiosis between two entities is a division of labor. We should be figuring out which jobs AI and humans each excel at and transitioning toward a symbiont that combines the two (or three, if including robotics).

