To use your insight here, this is where I claim we have to put in the effort to examine situations in detail - i.e. define the "wall".
I do a lot of "transcription" editing. It typically takes me 4 hours to edit a 1-hour meeting transcript. But, again, a "human", with any I.Q., who is not familiar with the subject matter of the discussion can easily produce "wrong" summaries.
All AI applications that create such "transcriptions" are, as far as I know, directed by an "AI prompt", which is created by a "conscious human". Based on my work with "consciousness", we can get a hint about the "wall" by posing the following question: "How would we formulate an 'AI prompt' that accurately generates 'AI prompts'?"
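To make my question concrete, here is a minimal sketch of what "a prompt that generates prompts" could look like, in Python. It assumes nothing about any particular AI product: call_model() is a hypothetical placeholder for whatever LLM interface one uses, mocked here so the loop actually runs.

    # Minimal sketch of recursive prompting. call_model() is a
    # hypothetical stand-in for a real LLM call, mocked so the
    # loop is runnable end-to-end.

    def call_model(prompt: str) -> str:
        # Placeholder: replace this mock with a real model call.
        return "Rewritten prompt (one level deeper): " + prompt

    seed = ("Write a prompt that instructs a language model to write "
            "accurate, task-specific prompts. Output only the prompt.")

    prompt = seed
    for depth in range(3):
        # The model's output is itself a prompt, fed straight back in.
        # Any misunderstanding at one depth is inherited at the next;
        # that compounding is one way to picture the "wall".
        prompt = call_model(prompt)
        print(f"depth {depth + 1}: {prompt}")

The point of the sketch is not the implementation but the structure: nothing in the loop can tell a good generated prompt from a bad one, which is exactly where the "conscious human" currently sits.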
I agree with the examination - IMO the problem is not the errors, but our inability to understand what the errors will be. This also extends to humans (or classes of them?). This comparison is where the action is, so to speak. The recursive prompting idea seems gravely worrisome, as it will further enthrone bad ideas.
Keith. Again, I agree with you. But, that’s my point. Our goal should not yet be the outcome. The goal should be understanding the “logic” that creates the “inability to understand what the errors will be.” Ironically, I think we’ll find it is not much different from the reasons so many differences of view in science and social understanding continue to evade resolution.
As for your “gravely worrisome” concern about “recursive prompting”, again, I’m not pushing to figure out how to actualize it. I’m raising the “recursive” model as a tool for understanding it. The reason I’m suggesting it is that, in my new model of human thinking, it was the understanding of “recursion” in the human brain that answered so many questions. [ https://www.academia.edu/112492199/A3_A_New_Theory_of_Consciousness ]
Keith. I agree.