Mostly agreed, with the caveats that (a) we don't yet understand how to combine LLMs with those other mechanisms in a way that will work, and (b) even when we make some progress on that question, I think it's still going to be incremental; I would not yet use words like "likely ... close to human cognitive capabilities".
Yes, I'm speculating here (and will probably regret it quite soon). If past experience is a guide, we will discover yet more pieces that are needed.